00:00:00.000 Started by upstream project "autotest-per-patch" build number 130935 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.071 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.071 The recommended git tool is: git 00:00:00.072 using credential 00000000-0000-0000-0000-000000000002 00:00:00.073 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.129 Fetching changes from the remote Git repository 00:00:00.131 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.199 Using shallow fetch with depth 1 00:00:00.199 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.199 > git --version # timeout=10 00:00:00.258 > git --version # 'git version 2.39.2' 00:00:00.258 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.291 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.291 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.855 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.867 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.879 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD) 00:00:05.879 > git config core.sparsecheckout # timeout=10 00:00:05.891 > git read-tree -mu HEAD # timeout=10 00:00:05.907 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5 00:00:05.926 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images" 00:00:05.926 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10 00:00:06.034 [Pipeline] Start of Pipeline 00:00:06.047 [Pipeline] library 00:00:06.048 Loading library shm_lib@master 00:00:06.048 Library shm_lib@master is cached. Copying from home. 00:00:06.066 [Pipeline] node 00:00:06.077 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:00:06.079 [Pipeline] { 00:00:06.088 [Pipeline] catchError 00:00:06.089 [Pipeline] { 00:00:06.101 [Pipeline] wrap 00:00:06.109 [Pipeline] { 00:00:06.115 [Pipeline] stage 00:00:06.117 [Pipeline] { (Prologue) 00:00:06.130 [Pipeline] echo 00:00:06.131 Node: VM-host-SM17 00:00:06.135 [Pipeline] cleanWs 00:00:06.145 [WS-CLEANUP] Deleting project workspace... 00:00:06.145 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.150 [WS-CLEANUP] done 00:00:06.334 [Pipeline] setCustomBuildProperty 00:00:06.418 [Pipeline] httpRequest 00:00:06.774 [Pipeline] echo 00:00:06.776 Sorcerer 10.211.164.101 is alive 00:00:06.785 [Pipeline] retry 00:00:06.786 [Pipeline] { 00:00:06.797 [Pipeline] httpRequest 00:00:06.800 HttpMethod: GET 00:00:06.801 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:06.801 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:06.802 Response Code: HTTP/1.1 200 OK 00:00:06.803 Success: Status code 200 is in the accepted range: 200,404 00:00:06.803 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:07.541 [Pipeline] } 00:00:07.557 [Pipeline] // retry 00:00:07.566 [Pipeline] sh 00:00:07.846 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:07.858 [Pipeline] httpRequest 00:00:08.219 [Pipeline] echo 00:00:08.220 Sorcerer 10.211.164.101 is alive 00:00:08.226 [Pipeline] retry 00:00:08.227 [Pipeline] { 00:00:08.236 [Pipeline] httpRequest 00:00:08.239 HttpMethod: GET 00:00:08.240 URL: http://10.211.164.101/packages/spdk_3c49040782a29109e63799cfa9442ad547a3ed8d.tar.gz 00:00:08.240 Sending request to url: http://10.211.164.101/packages/spdk_3c49040782a29109e63799cfa9442ad547a3ed8d.tar.gz 00:00:08.241 Response Code: HTTP/1.1 200 OK 00:00:08.242 Success: Status code 200 is in the accepted range: 200,404 00:00:08.242 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk_3c49040782a29109e63799cfa9442ad547a3ed8d.tar.gz 00:00:31.656 [Pipeline] } 00:00:31.674 [Pipeline] // retry 00:00:31.681 [Pipeline] sh 00:00:31.962 + tar --no-same-owner -xf spdk_3c49040782a29109e63799cfa9442ad547a3ed8d.tar.gz 00:00:35.259 [Pipeline] sh 00:00:35.629 + git -C spdk log --oneline -n5 00:00:35.629 3c4904078 lib/reduce: unlink meta file 00:00:35.629 92108e0a2 fsdev/aio: add support for null IOs 00:00:35.629 dcdab59d3 lib/reduce: Check return code of read superblock 00:00:35.629 95d9d27f7 bdev/nvme: controller failover/multipath doc change 00:00:35.629 f366dac4a bdev/nvme: removed 'multipath' param from spdk_bdev_nvme_create() 00:00:35.648 [Pipeline] writeFile 00:00:35.663 [Pipeline] sh 00:00:35.945 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:35.956 [Pipeline] sh 00:00:36.237 + cat autorun-spdk.conf 00:00:36.237 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:36.237 SPDK_TEST_NVMF=1 00:00:36.237 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:36.237 SPDK_TEST_URING=1 00:00:36.237 SPDK_TEST_USDT=1 00:00:36.237 SPDK_RUN_UBSAN=1 00:00:36.237 NET_TYPE=virt 00:00:36.237 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:36.244 RUN_NIGHTLY=0 00:00:36.246 [Pipeline] } 00:00:36.259 [Pipeline] // stage 00:00:36.274 [Pipeline] stage 00:00:36.277 [Pipeline] { (Run VM) 00:00:36.289 [Pipeline] sh 00:00:36.570 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:36.570 + echo 'Start stage prepare_nvme.sh' 00:00:36.570 Start stage prepare_nvme.sh 00:00:36.570 + [[ -n 5 ]] 00:00:36.570 + disk_prefix=ex5 00:00:36.570 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 ]] 00:00:36.570 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf ]] 00:00:36.570 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf 00:00:36.570 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:36.570 ++ SPDK_TEST_NVMF=1 00:00:36.570 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:36.570 ++ 
SPDK_TEST_URING=1 00:00:36.570 ++ SPDK_TEST_USDT=1 00:00:36.570 ++ SPDK_RUN_UBSAN=1 00:00:36.570 ++ NET_TYPE=virt 00:00:36.570 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:36.570 ++ RUN_NIGHTLY=0 00:00:36.570 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:00:36.570 + nvme_files=() 00:00:36.570 + declare -A nvme_files 00:00:36.570 + backend_dir=/var/lib/libvirt/images/backends 00:00:36.570 + nvme_files['nvme.img']=5G 00:00:36.570 + nvme_files['nvme-cmb.img']=5G 00:00:36.570 + nvme_files['nvme-multi0.img']=4G 00:00:36.570 + nvme_files['nvme-multi1.img']=4G 00:00:36.570 + nvme_files['nvme-multi2.img']=4G 00:00:36.570 + nvme_files['nvme-openstack.img']=8G 00:00:36.570 + nvme_files['nvme-zns.img']=5G 00:00:36.570 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:36.570 + (( SPDK_TEST_FTL == 1 )) 00:00:36.570 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:36.570 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:36.570 + for nvme in "${!nvme_files[@]}" 00:00:36.570 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:00:36.570 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:36.570 + for nvme in "${!nvme_files[@]}" 00:00:36.570 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:00:36.570 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:36.570 + for nvme in "${!nvme_files[@]}" 00:00:36.570 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:00:36.570 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:36.570 + for nvme in "${!nvme_files[@]}" 00:00:36.570 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:00:36.570 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:36.570 + for nvme in "${!nvme_files[@]}" 00:00:36.570 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:00:36.570 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:36.570 + for nvme in "${!nvme_files[@]}" 00:00:36.570 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:00:36.570 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:36.570 + for nvme in "${!nvme_files[@]}" 00:00:36.570 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:00:37.946 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:37.946 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:00:37.946 + echo 'End stage prepare_nvme.sh' 00:00:37.947 End stage prepare_nvme.sh 00:00:37.958 [Pipeline] sh 00:00:38.238 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:38.238 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b 
/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39 00:00:38.238 00:00:38.238 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant 00:00:38.238 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk 00:00:38.238 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:00:38.238 HELP=0 00:00:38.238 DRY_RUN=0 00:00:38.238 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:00:38.238 NVME_DISKS_TYPE=nvme,nvme, 00:00:38.238 NVME_AUTO_CREATE=0 00:00:38.238 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:00:38.238 NVME_CMB=,, 00:00:38.238 NVME_PMR=,, 00:00:38.238 NVME_ZNS=,, 00:00:38.238 NVME_MS=,, 00:00:38.238 NVME_FDP=,, 00:00:38.238 SPDK_VAGRANT_DISTRO=fedora39 00:00:38.238 SPDK_VAGRANT_VMCPU=10 00:00:38.238 SPDK_VAGRANT_VMRAM=12288 00:00:38.238 SPDK_VAGRANT_PROVIDER=libvirt 00:00:38.238 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:38.238 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:38.238 SPDK_OPENSTACK_NETWORK=0 00:00:38.238 VAGRANT_PACKAGE_BOX=0 00:00:38.238 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:00:38.238 FORCE_DISTRO=true 00:00:38.238 VAGRANT_BOX_VERSION= 00:00:38.238 EXTRA_VAGRANTFILES= 00:00:38.238 NIC_MODEL=e1000 00:00:38.238 00:00:38.238 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt' 00:00:38.238 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:00:40.772 Bringing machine 'default' up with 'libvirt' provider... 00:00:41.350 ==> default: Creating image (snapshot of base box volume). 00:00:41.610 ==> default: Creating domain with the following settings... 
00:00:41.610 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1728443004_ef0e66f61ee0e2cd95c6 00:00:41.610 ==> default: -- Domain type: kvm 00:00:41.610 ==> default: -- Cpus: 10 00:00:41.610 ==> default: -- Feature: acpi 00:00:41.610 ==> default: -- Feature: apic 00:00:41.610 ==> default: -- Feature: pae 00:00:41.610 ==> default: -- Memory: 12288M 00:00:41.610 ==> default: -- Memory Backing: hugepages: 00:00:41.610 ==> default: -- Management MAC: 00:00:41.610 ==> default: -- Loader: 00:00:41.610 ==> default: -- Nvram: 00:00:41.610 ==> default: -- Base box: spdk/fedora39 00:00:41.610 ==> default: -- Storage pool: default 00:00:41.610 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1728443004_ef0e66f61ee0e2cd95c6.img (20G) 00:00:41.610 ==> default: -- Volume Cache: default 00:00:41.610 ==> default: -- Kernel: 00:00:41.610 ==> default: -- Initrd: 00:00:41.610 ==> default: -- Graphics Type: vnc 00:00:41.610 ==> default: -- Graphics Port: -1 00:00:41.610 ==> default: -- Graphics IP: 127.0.0.1 00:00:41.610 ==> default: -- Graphics Password: Not defined 00:00:41.610 ==> default: -- Video Type: cirrus 00:00:41.610 ==> default: -- Video VRAM: 9216 00:00:41.610 ==> default: -- Sound Type: 00:00:41.610 ==> default: -- Keymap: en-us 00:00:41.610 ==> default: -- TPM Path: 00:00:41.610 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:41.610 ==> default: -- Command line args: 00:00:41.610 ==> default: -> value=-device, 00:00:41.610 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:41.610 ==> default: -> value=-drive, 00:00:41.610 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:00:41.610 ==> default: -> value=-device, 00:00:41.610 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:41.610 ==> default: -> value=-device, 00:00:41.610 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:41.610 ==> default: -> value=-drive, 00:00:41.610 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:41.610 ==> default: -> value=-device, 00:00:41.610 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:41.610 ==> default: -> value=-drive, 00:00:41.610 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:41.610 ==> default: -> value=-device, 00:00:41.610 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:41.610 ==> default: -> value=-drive, 00:00:41.610 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:41.610 ==> default: -> value=-device, 00:00:41.610 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:41.610 ==> default: Creating shared folders metadata... 00:00:41.610 ==> default: Starting domain. 00:00:42.987 ==> default: Waiting for domain to get an IP address... 00:01:01.068 ==> default: Waiting for SSH to become available... 00:01:01.068 ==> default: Configuring and enabling network interfaces... 
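The repeating -device nvme / -drive / -device nvme-ns triples in the domain settings above are how each raw backing image is exposed to the guest as an NVMe namespace: one nvme controller per serial number, one nvme-ns per backing file. A minimal hand-run sketch of the same wiring, assuming KVM and one existing raw image (memory size, IDs and the serial here are illustrative, not the exact arguments the Vagrantfile assembles):

  # Illustrative only: IDs, serial and memory size are placeholders, not the CI's exact invocation.
  qemu-system-x86_64 -enable-kvm -m 2048 \
    -device nvme,id=nvme-0,serial=12340 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0 \
    -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096

Additional namespaces attach to the same controller by adding further -drive / -device nvme-ns pairs with distinct nsid values, which is the nvme-1 multi-namespace pattern visible in the settings above.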
00:01:03.598 default: SSH address: 192.168.121.123:22 00:01:03.598 default: SSH username: vagrant 00:01:03.598 default: SSH auth method: private key 00:01:05.500 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:13.616 ==> default: Mounting SSHFS shared folder... 00:01:14.993 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:14.993 ==> default: Checking Mount.. 00:01:15.931 ==> default: Folder Successfully Mounted! 00:01:15.931 ==> default: Running provisioner: file... 00:01:16.871 default: ~/.gitconfig => .gitconfig 00:01:17.437 00:01:17.437 SUCCESS! 00:01:17.437 00:01:17.437 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:01:17.437 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:17.437 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 00:01:17.437 00:01:17.446 [Pipeline] } 00:01:17.460 [Pipeline] // stage 00:01:17.466 [Pipeline] dir 00:01:17.466 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt 00:01:17.467 [Pipeline] { 00:01:17.476 [Pipeline] catchError 00:01:17.478 [Pipeline] { 00:01:17.487 [Pipeline] sh 00:01:17.761 + vagrant ssh-config --host vagrant 00:01:17.761 + sed -ne /^Host/,$p 00:01:17.761 + tee ssh_conf 00:01:21.946 Host vagrant 00:01:21.946 HostName 192.168.121.123 00:01:21.946 User vagrant 00:01:21.946 Port 22 00:01:21.946 UserKnownHostsFile /dev/null 00:01:21.946 StrictHostKeyChecking no 00:01:21.946 PasswordAuthentication no 00:01:21.946 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:21.946 IdentitiesOnly yes 00:01:21.946 LogLevel FATAL 00:01:21.946 ForwardAgent yes 00:01:21.946 ForwardX11 yes 00:01:21.946 00:01:21.960 [Pipeline] withEnv 00:01:21.962 [Pipeline] { 00:01:21.976 [Pipeline] sh 00:01:22.256 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:22.256 source /etc/os-release 00:01:22.256 [[ -e /image.version ]] && img=$(< /image.version) 00:01:22.256 # Minimal, systemd-like check. 00:01:22.256 if [[ -e /.dockerenv ]]; then 00:01:22.256 # Clear garbage from the node's name: 00:01:22.256 # agt-er_autotest_547-896 -> autotest_547-896 00:01:22.256 # $HOSTNAME is the actual container id 00:01:22.256 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:22.256 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:22.256 # We can assume this is a mount from a host where container is running, 00:01:22.256 # so fetch its hostname to easily identify the target swarm worker. 
00:01:22.256 container="$(< /etc/hostname) ($agent)" 00:01:22.256 else 00:01:22.256 # Fallback 00:01:22.256 container=$agent 00:01:22.256 fi 00:01:22.256 fi 00:01:22.256 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:22.256 00:01:22.526 [Pipeline] } 00:01:22.542 [Pipeline] // withEnv 00:01:22.550 [Pipeline] setCustomBuildProperty 00:01:22.566 [Pipeline] stage 00:01:22.568 [Pipeline] { (Tests) 00:01:22.586 [Pipeline] sh 00:01:22.866 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:23.140 [Pipeline] sh 00:01:23.419 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:23.755 [Pipeline] timeout 00:01:23.755 Timeout set to expire in 1 hr 0 min 00:01:23.757 [Pipeline] { 00:01:23.771 [Pipeline] sh 00:01:24.050 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:24.618 HEAD is now at 3c4904078 lib/reduce: unlink meta file 00:01:24.632 [Pipeline] sh 00:01:24.913 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:25.185 [Pipeline] sh 00:01:25.465 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:25.740 [Pipeline] sh 00:01:26.019 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:26.278 ++ readlink -f spdk_repo 00:01:26.278 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:26.278 + [[ -n /home/vagrant/spdk_repo ]] 00:01:26.278 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:26.278 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:26.278 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:26.278 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:26.278 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:26.278 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:26.278 + cd /home/vagrant/spdk_repo 00:01:26.278 + source /etc/os-release 00:01:26.278 ++ NAME='Fedora Linux' 00:01:26.278 ++ VERSION='39 (Cloud Edition)' 00:01:26.278 ++ ID=fedora 00:01:26.278 ++ VERSION_ID=39 00:01:26.278 ++ VERSION_CODENAME= 00:01:26.278 ++ PLATFORM_ID=platform:f39 00:01:26.278 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:26.278 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:26.278 ++ LOGO=fedora-logo-icon 00:01:26.278 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:26.278 ++ HOME_URL=https://fedoraproject.org/ 00:01:26.278 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:26.278 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:26.278 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:26.278 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:26.278 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:26.278 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:26.278 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:26.278 ++ SUPPORT_END=2024-11-12 00:01:26.278 ++ VARIANT='Cloud Edition' 00:01:26.278 ++ VARIANT_ID=cloud 00:01:26.278 + uname -a 00:01:26.278 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:26.278 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:26.845 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:26.845 Hugepages 00:01:26.845 node hugesize free / total 00:01:26.845 node0 1048576kB 0 / 0 00:01:26.845 node0 2048kB 0 / 0 00:01:26.845 00:01:26.845 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:26.845 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:26.845 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:26.845 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:26.845 + rm -f /tmp/spdk-ld-path 00:01:26.845 + source autorun-spdk.conf 00:01:26.845 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:26.846 ++ SPDK_TEST_NVMF=1 00:01:26.846 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:26.846 ++ SPDK_TEST_URING=1 00:01:26.846 ++ SPDK_TEST_USDT=1 00:01:26.846 ++ SPDK_RUN_UBSAN=1 00:01:26.846 ++ NET_TYPE=virt 00:01:26.846 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:26.846 ++ RUN_NIGHTLY=0 00:01:26.846 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:26.846 + [[ -n '' ]] 00:01:26.846 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:26.846 + for M in /var/spdk/build-*-manifest.txt 00:01:26.846 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:26.846 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:26.846 + for M in /var/spdk/build-*-manifest.txt 00:01:26.846 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:26.846 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:26.846 + for M in /var/spdk/build-*-manifest.txt 00:01:26.846 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:26.846 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:26.846 ++ uname 00:01:26.846 + [[ Linux == \L\i\n\u\x ]] 00:01:26.846 + sudo dmesg -T 00:01:26.846 + sudo dmesg --clear 00:01:26.846 + dmesg_pid=5202 00:01:26.846 + sudo dmesg -Tw 00:01:26.846 + [[ Fedora Linux == FreeBSD ]] 00:01:26.846 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:26.846 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:26.846 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:26.846 + [[ -x /usr/src/fio-static/fio ]] 00:01:26.846 + export FIO_BIN=/usr/src/fio-static/fio 00:01:26.846 + FIO_BIN=/usr/src/fio-static/fio 00:01:26.846 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:26.846 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:26.846 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:26.846 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:26.846 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:26.846 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:26.846 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:26.846 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:26.846 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:26.846 Test configuration: 00:01:26.846 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:26.846 SPDK_TEST_NVMF=1 00:01:26.846 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:26.846 SPDK_TEST_URING=1 00:01:26.846 SPDK_TEST_USDT=1 00:01:26.846 SPDK_RUN_UBSAN=1 00:01:26.846 NET_TYPE=virt 00:01:26.846 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:26.846 RUN_NIGHTLY=0 03:04:10 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:01:26.846 03:04:10 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:26.846 03:04:10 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:26.846 03:04:10 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:26.846 03:04:10 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:26.846 03:04:10 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:26.846 03:04:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:26.846 03:04:10 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:26.846 03:04:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:26.846 03:04:10 -- paths/export.sh@5 -- $ export PATH 00:01:26.846 03:04:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:27.105 03:04:10 -- common/autobuild_common.sh@485 -- $ 
out=/home/vagrant/spdk_repo/spdk/../output 00:01:27.105 03:04:10 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:27.105 03:04:10 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728443050.XXXXXX 00:01:27.105 03:04:10 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728443050.Dowv1c 00:01:27.105 03:04:10 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:27.105 03:04:10 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:27.105 03:04:10 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:27.105 03:04:10 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:27.105 03:04:10 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:27.105 03:04:10 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:27.105 03:04:10 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:27.105 03:04:10 -- common/autotest_common.sh@10 -- $ set +x 00:01:27.105 03:04:10 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:27.105 03:04:10 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:27.105 03:04:10 -- pm/common@17 -- $ local monitor 00:01:27.105 03:04:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:27.105 03:04:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:27.105 03:04:10 -- pm/common@25 -- $ sleep 1 00:01:27.105 03:04:10 -- pm/common@21 -- $ date +%s 00:01:27.105 03:04:10 -- pm/common@21 -- $ date +%s 00:01:27.105 03:04:10 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728443050 00:01:27.105 03:04:10 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728443050 00:01:27.105 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728443050_collect-vmstat.pm.log 00:01:27.105 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728443050_collect-cpu-load.pm.log 00:01:28.041 03:04:11 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:28.041 03:04:11 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:28.041 03:04:11 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:28.041 03:04:11 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:28.041 03:04:11 -- spdk/autobuild.sh@16 -- $ date -u 00:01:28.041 Wed Oct 9 03:04:11 AM UTC 2024 00:01:28.041 03:04:11 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:28.041 v25.01-pre-42-g3c4904078 00:01:28.041 03:04:11 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:28.041 03:04:11 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:28.041 03:04:11 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:28.041 03:04:11 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:28.041 03:04:11 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:28.041 03:04:11 -- common/autotest_common.sh@10 -- $ set +x 00:01:28.041 
************************************ 00:01:28.041 START TEST ubsan 00:01:28.041 ************************************ 00:01:28.041 using ubsan 00:01:28.041 03:04:11 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:28.041 00:01:28.041 real 0m0.000s 00:01:28.041 user 0m0.000s 00:01:28.041 sys 0m0.000s 00:01:28.041 03:04:11 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:28.041 ************************************ 00:01:28.041 END TEST ubsan 00:01:28.041 03:04:11 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:28.041 ************************************ 00:01:28.041 03:04:11 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:28.041 03:04:11 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:28.041 03:04:11 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:28.041 03:04:11 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:28.041 03:04:11 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:28.041 03:04:11 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:28.041 03:04:11 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:28.041 03:04:11 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:28.041 03:04:11 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:28.299 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:28.299 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:28.558 Using 'verbs' RDMA provider 00:01:41.730 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:56.628 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:56.628 Creating mk/config.mk...done. 00:01:56.628 Creating mk/cc.flags.mk...done. 00:01:56.628 Type 'make' to build. 00:01:56.628 03:04:38 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:01:56.628 03:04:38 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:56.628 03:04:38 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:56.628 03:04:38 -- common/autotest_common.sh@10 -- $ set +x 00:01:56.628 ************************************ 00:01:56.628 START TEST make 00:01:56.628 ************************************ 00:01:56.628 03:04:38 make -- common/autotest_common.sh@1125 -- $ make -j10 00:01:56.628 make[1]: Nothing to be done for 'all'. 
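The configure invocation and make -j10 recorded above can be replayed by hand against the same SPDK checkout. A minimal sketch using the flags copied verbatim from the log (it assumes the fio sources are present at /usr/src/fio, as they are in this VM image; adjust -j to the local core count):

  # Flags copied from the autobuild step above.
  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
  make -j10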
00:02:08.840 The Meson build system 00:02:08.840 Version: 1.5.0 00:02:08.840 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:08.840 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:08.840 Build type: native build 00:02:08.840 Program cat found: YES (/usr/bin/cat) 00:02:08.840 Project name: DPDK 00:02:08.840 Project version: 24.03.0 00:02:08.840 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:08.840 C linker for the host machine: cc ld.bfd 2.40-14 00:02:08.840 Host machine cpu family: x86_64 00:02:08.840 Host machine cpu: x86_64 00:02:08.840 Message: ## Building in Developer Mode ## 00:02:08.840 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:08.840 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:08.840 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:08.840 Program python3 found: YES (/usr/bin/python3) 00:02:08.840 Program cat found: YES (/usr/bin/cat) 00:02:08.840 Compiler for C supports arguments -march=native: YES 00:02:08.840 Checking for size of "void *" : 8 00:02:08.840 Checking for size of "void *" : 8 (cached) 00:02:08.840 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:08.840 Library m found: YES 00:02:08.840 Library numa found: YES 00:02:08.840 Has header "numaif.h" : YES 00:02:08.840 Library fdt found: NO 00:02:08.840 Library execinfo found: NO 00:02:08.840 Has header "execinfo.h" : YES 00:02:08.840 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:08.840 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:08.840 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:08.840 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:08.840 Run-time dependency openssl found: YES 3.1.1 00:02:08.840 Run-time dependency libpcap found: YES 1.10.4 00:02:08.840 Has header "pcap.h" with dependency libpcap: YES 00:02:08.840 Compiler for C supports arguments -Wcast-qual: YES 00:02:08.840 Compiler for C supports arguments -Wdeprecated: YES 00:02:08.840 Compiler for C supports arguments -Wformat: YES 00:02:08.840 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:08.840 Compiler for C supports arguments -Wformat-security: NO 00:02:08.840 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:08.840 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:08.840 Compiler for C supports arguments -Wnested-externs: YES 00:02:08.840 Compiler for C supports arguments -Wold-style-definition: YES 00:02:08.840 Compiler for C supports arguments -Wpointer-arith: YES 00:02:08.840 Compiler for C supports arguments -Wsign-compare: YES 00:02:08.840 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:08.840 Compiler for C supports arguments -Wundef: YES 00:02:08.840 Compiler for C supports arguments -Wwrite-strings: YES 00:02:08.841 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:08.841 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:08.841 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:08.841 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:08.841 Program objdump found: YES (/usr/bin/objdump) 00:02:08.841 Compiler for C supports arguments -mavx512f: YES 00:02:08.841 Checking if "AVX512 checking" compiles: YES 00:02:08.841 Fetching value of define "__SSE4_2__" : 1 00:02:08.841 Fetching value of define 
"__AES__" : 1 00:02:08.841 Fetching value of define "__AVX__" : 1 00:02:08.841 Fetching value of define "__AVX2__" : 1 00:02:08.841 Fetching value of define "__AVX512BW__" : (undefined) 00:02:08.841 Fetching value of define "__AVX512CD__" : (undefined) 00:02:08.841 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:08.841 Fetching value of define "__AVX512F__" : (undefined) 00:02:08.841 Fetching value of define "__AVX512VL__" : (undefined) 00:02:08.841 Fetching value of define "__PCLMUL__" : 1 00:02:08.841 Fetching value of define "__RDRND__" : 1 00:02:08.841 Fetching value of define "__RDSEED__" : 1 00:02:08.841 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:08.841 Fetching value of define "__znver1__" : (undefined) 00:02:08.841 Fetching value of define "__znver2__" : (undefined) 00:02:08.841 Fetching value of define "__znver3__" : (undefined) 00:02:08.841 Fetching value of define "__znver4__" : (undefined) 00:02:08.841 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:08.841 Message: lib/log: Defining dependency "log" 00:02:08.841 Message: lib/kvargs: Defining dependency "kvargs" 00:02:08.841 Message: lib/telemetry: Defining dependency "telemetry" 00:02:08.841 Checking for function "getentropy" : NO 00:02:08.841 Message: lib/eal: Defining dependency "eal" 00:02:08.841 Message: lib/ring: Defining dependency "ring" 00:02:08.841 Message: lib/rcu: Defining dependency "rcu" 00:02:08.841 Message: lib/mempool: Defining dependency "mempool" 00:02:08.841 Message: lib/mbuf: Defining dependency "mbuf" 00:02:08.841 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:08.841 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:08.841 Compiler for C supports arguments -mpclmul: YES 00:02:08.841 Compiler for C supports arguments -maes: YES 00:02:08.841 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:08.841 Compiler for C supports arguments -mavx512bw: YES 00:02:08.841 Compiler for C supports arguments -mavx512dq: YES 00:02:08.841 Compiler for C supports arguments -mavx512vl: YES 00:02:08.841 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:08.841 Compiler for C supports arguments -mavx2: YES 00:02:08.841 Compiler for C supports arguments -mavx: YES 00:02:08.841 Message: lib/net: Defining dependency "net" 00:02:08.841 Message: lib/meter: Defining dependency "meter" 00:02:08.841 Message: lib/ethdev: Defining dependency "ethdev" 00:02:08.841 Message: lib/pci: Defining dependency "pci" 00:02:08.841 Message: lib/cmdline: Defining dependency "cmdline" 00:02:08.841 Message: lib/hash: Defining dependency "hash" 00:02:08.841 Message: lib/timer: Defining dependency "timer" 00:02:08.841 Message: lib/compressdev: Defining dependency "compressdev" 00:02:08.841 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:08.841 Message: lib/dmadev: Defining dependency "dmadev" 00:02:08.841 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:08.841 Message: lib/power: Defining dependency "power" 00:02:08.841 Message: lib/reorder: Defining dependency "reorder" 00:02:08.841 Message: lib/security: Defining dependency "security" 00:02:08.841 Has header "linux/userfaultfd.h" : YES 00:02:08.841 Has header "linux/vduse.h" : YES 00:02:08.841 Message: lib/vhost: Defining dependency "vhost" 00:02:08.841 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:08.841 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:08.841 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:08.841 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:08.841 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:08.841 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:08.841 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:08.841 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:08.841 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:08.841 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:08.841 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:08.841 Configuring doxy-api-html.conf using configuration 00:02:08.841 Configuring doxy-api-man.conf using configuration 00:02:08.841 Program mandb found: YES (/usr/bin/mandb) 00:02:08.841 Program sphinx-build found: NO 00:02:08.841 Configuring rte_build_config.h using configuration 00:02:08.841 Message: 00:02:08.841 ================= 00:02:08.841 Applications Enabled 00:02:08.841 ================= 00:02:08.841 00:02:08.841 apps: 00:02:08.841 00:02:08.841 00:02:08.841 Message: 00:02:08.841 ================= 00:02:08.841 Libraries Enabled 00:02:08.841 ================= 00:02:08.841 00:02:08.841 libs: 00:02:08.841 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:08.841 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:08.841 cryptodev, dmadev, power, reorder, security, vhost, 00:02:08.841 00:02:08.841 Message: 00:02:08.841 =============== 00:02:08.841 Drivers Enabled 00:02:08.841 =============== 00:02:08.841 00:02:08.841 common: 00:02:08.841 00:02:08.841 bus: 00:02:08.841 pci, vdev, 00:02:08.841 mempool: 00:02:08.841 ring, 00:02:08.841 dma: 00:02:08.841 00:02:08.841 net: 00:02:08.841 00:02:08.841 crypto: 00:02:08.841 00:02:08.841 compress: 00:02:08.841 00:02:08.841 vdpa: 00:02:08.841 00:02:08.841 00:02:08.841 Message: 00:02:08.841 ================= 00:02:08.841 Content Skipped 00:02:08.841 ================= 00:02:08.841 00:02:08.841 apps: 00:02:08.841 dumpcap: explicitly disabled via build config 00:02:08.841 graph: explicitly disabled via build config 00:02:08.841 pdump: explicitly disabled via build config 00:02:08.841 proc-info: explicitly disabled via build config 00:02:08.841 test-acl: explicitly disabled via build config 00:02:08.841 test-bbdev: explicitly disabled via build config 00:02:08.841 test-cmdline: explicitly disabled via build config 00:02:08.841 test-compress-perf: explicitly disabled via build config 00:02:08.841 test-crypto-perf: explicitly disabled via build config 00:02:08.841 test-dma-perf: explicitly disabled via build config 00:02:08.841 test-eventdev: explicitly disabled via build config 00:02:08.841 test-fib: explicitly disabled via build config 00:02:08.841 test-flow-perf: explicitly disabled via build config 00:02:08.841 test-gpudev: explicitly disabled via build config 00:02:08.841 test-mldev: explicitly disabled via build config 00:02:08.842 test-pipeline: explicitly disabled via build config 00:02:08.842 test-pmd: explicitly disabled via build config 00:02:08.842 test-regex: explicitly disabled via build config 00:02:08.842 test-sad: explicitly disabled via build config 00:02:08.842 test-security-perf: explicitly disabled via build config 00:02:08.842 00:02:08.842 libs: 00:02:08.842 argparse: explicitly disabled via build config 00:02:08.842 metrics: explicitly disabled via build config 00:02:08.842 acl: explicitly disabled via build config 00:02:08.842 bbdev: explicitly disabled via build config 
00:02:08.842 bitratestats: explicitly disabled via build config 00:02:08.842 bpf: explicitly disabled via build config 00:02:08.842 cfgfile: explicitly disabled via build config 00:02:08.842 distributor: explicitly disabled via build config 00:02:08.842 efd: explicitly disabled via build config 00:02:08.842 eventdev: explicitly disabled via build config 00:02:08.842 dispatcher: explicitly disabled via build config 00:02:08.842 gpudev: explicitly disabled via build config 00:02:08.842 gro: explicitly disabled via build config 00:02:08.842 gso: explicitly disabled via build config 00:02:08.842 ip_frag: explicitly disabled via build config 00:02:08.842 jobstats: explicitly disabled via build config 00:02:08.842 latencystats: explicitly disabled via build config 00:02:08.842 lpm: explicitly disabled via build config 00:02:08.842 member: explicitly disabled via build config 00:02:08.842 pcapng: explicitly disabled via build config 00:02:08.842 rawdev: explicitly disabled via build config 00:02:08.842 regexdev: explicitly disabled via build config 00:02:08.842 mldev: explicitly disabled via build config 00:02:08.842 rib: explicitly disabled via build config 00:02:08.842 sched: explicitly disabled via build config 00:02:08.842 stack: explicitly disabled via build config 00:02:08.842 ipsec: explicitly disabled via build config 00:02:08.842 pdcp: explicitly disabled via build config 00:02:08.842 fib: explicitly disabled via build config 00:02:08.842 port: explicitly disabled via build config 00:02:08.842 pdump: explicitly disabled via build config 00:02:08.842 table: explicitly disabled via build config 00:02:08.842 pipeline: explicitly disabled via build config 00:02:08.842 graph: explicitly disabled via build config 00:02:08.842 node: explicitly disabled via build config 00:02:08.842 00:02:08.842 drivers: 00:02:08.842 common/cpt: not in enabled drivers build config 00:02:08.842 common/dpaax: not in enabled drivers build config 00:02:08.842 common/iavf: not in enabled drivers build config 00:02:08.842 common/idpf: not in enabled drivers build config 00:02:08.842 common/ionic: not in enabled drivers build config 00:02:08.842 common/mvep: not in enabled drivers build config 00:02:08.842 common/octeontx: not in enabled drivers build config 00:02:08.842 bus/auxiliary: not in enabled drivers build config 00:02:08.842 bus/cdx: not in enabled drivers build config 00:02:08.842 bus/dpaa: not in enabled drivers build config 00:02:08.842 bus/fslmc: not in enabled drivers build config 00:02:08.842 bus/ifpga: not in enabled drivers build config 00:02:08.842 bus/platform: not in enabled drivers build config 00:02:08.842 bus/uacce: not in enabled drivers build config 00:02:08.842 bus/vmbus: not in enabled drivers build config 00:02:08.842 common/cnxk: not in enabled drivers build config 00:02:08.842 common/mlx5: not in enabled drivers build config 00:02:08.842 common/nfp: not in enabled drivers build config 00:02:08.842 common/nitrox: not in enabled drivers build config 00:02:08.842 common/qat: not in enabled drivers build config 00:02:08.842 common/sfc_efx: not in enabled drivers build config 00:02:08.842 mempool/bucket: not in enabled drivers build config 00:02:08.842 mempool/cnxk: not in enabled drivers build config 00:02:08.842 mempool/dpaa: not in enabled drivers build config 00:02:08.842 mempool/dpaa2: not in enabled drivers build config 00:02:08.842 mempool/octeontx: not in enabled drivers build config 00:02:08.842 mempool/stack: not in enabled drivers build config 00:02:08.842 dma/cnxk: not in enabled 
drivers build config 00:02:08.842 dma/dpaa: not in enabled drivers build config 00:02:08.842 dma/dpaa2: not in enabled drivers build config 00:02:08.842 dma/hisilicon: not in enabled drivers build config 00:02:08.842 dma/idxd: not in enabled drivers build config 00:02:08.842 dma/ioat: not in enabled drivers build config 00:02:08.842 dma/skeleton: not in enabled drivers build config 00:02:08.842 net/af_packet: not in enabled drivers build config 00:02:08.842 net/af_xdp: not in enabled drivers build config 00:02:08.842 net/ark: not in enabled drivers build config 00:02:08.842 net/atlantic: not in enabled drivers build config 00:02:08.842 net/avp: not in enabled drivers build config 00:02:08.842 net/axgbe: not in enabled drivers build config 00:02:08.842 net/bnx2x: not in enabled drivers build config 00:02:08.842 net/bnxt: not in enabled drivers build config 00:02:08.842 net/bonding: not in enabled drivers build config 00:02:08.842 net/cnxk: not in enabled drivers build config 00:02:08.842 net/cpfl: not in enabled drivers build config 00:02:08.842 net/cxgbe: not in enabled drivers build config 00:02:08.842 net/dpaa: not in enabled drivers build config 00:02:08.842 net/dpaa2: not in enabled drivers build config 00:02:08.842 net/e1000: not in enabled drivers build config 00:02:08.842 net/ena: not in enabled drivers build config 00:02:08.842 net/enetc: not in enabled drivers build config 00:02:08.842 net/enetfec: not in enabled drivers build config 00:02:08.842 net/enic: not in enabled drivers build config 00:02:08.842 net/failsafe: not in enabled drivers build config 00:02:08.842 net/fm10k: not in enabled drivers build config 00:02:08.842 net/gve: not in enabled drivers build config 00:02:08.842 net/hinic: not in enabled drivers build config 00:02:08.842 net/hns3: not in enabled drivers build config 00:02:08.842 net/i40e: not in enabled drivers build config 00:02:08.842 net/iavf: not in enabled drivers build config 00:02:08.842 net/ice: not in enabled drivers build config 00:02:08.842 net/idpf: not in enabled drivers build config 00:02:08.842 net/igc: not in enabled drivers build config 00:02:08.842 net/ionic: not in enabled drivers build config 00:02:08.842 net/ipn3ke: not in enabled drivers build config 00:02:08.842 net/ixgbe: not in enabled drivers build config 00:02:08.842 net/mana: not in enabled drivers build config 00:02:08.842 net/memif: not in enabled drivers build config 00:02:08.842 net/mlx4: not in enabled drivers build config 00:02:08.842 net/mlx5: not in enabled drivers build config 00:02:08.842 net/mvneta: not in enabled drivers build config 00:02:08.842 net/mvpp2: not in enabled drivers build config 00:02:08.842 net/netvsc: not in enabled drivers build config 00:02:08.842 net/nfb: not in enabled drivers build config 00:02:08.842 net/nfp: not in enabled drivers build config 00:02:08.842 net/ngbe: not in enabled drivers build config 00:02:08.842 net/null: not in enabled drivers build config 00:02:08.842 net/octeontx: not in enabled drivers build config 00:02:08.842 net/octeon_ep: not in enabled drivers build config 00:02:08.842 net/pcap: not in enabled drivers build config 00:02:08.842 net/pfe: not in enabled drivers build config 00:02:08.842 net/qede: not in enabled drivers build config 00:02:08.843 net/ring: not in enabled drivers build config 00:02:08.843 net/sfc: not in enabled drivers build config 00:02:08.843 net/softnic: not in enabled drivers build config 00:02:08.843 net/tap: not in enabled drivers build config 00:02:08.843 net/thunderx: not in enabled drivers build 
config 00:02:08.843 net/txgbe: not in enabled drivers build config 00:02:08.843 net/vdev_netvsc: not in enabled drivers build config 00:02:08.843 net/vhost: not in enabled drivers build config 00:02:08.843 net/virtio: not in enabled drivers build config 00:02:08.843 net/vmxnet3: not in enabled drivers build config 00:02:08.843 raw/*: missing internal dependency, "rawdev" 00:02:08.843 crypto/armv8: not in enabled drivers build config 00:02:08.843 crypto/bcmfs: not in enabled drivers build config 00:02:08.843 crypto/caam_jr: not in enabled drivers build config 00:02:08.843 crypto/ccp: not in enabled drivers build config 00:02:08.843 crypto/cnxk: not in enabled drivers build config 00:02:08.843 crypto/dpaa_sec: not in enabled drivers build config 00:02:08.843 crypto/dpaa2_sec: not in enabled drivers build config 00:02:08.843 crypto/ipsec_mb: not in enabled drivers build config 00:02:08.843 crypto/mlx5: not in enabled drivers build config 00:02:08.843 crypto/mvsam: not in enabled drivers build config 00:02:08.843 crypto/nitrox: not in enabled drivers build config 00:02:08.843 crypto/null: not in enabled drivers build config 00:02:08.843 crypto/octeontx: not in enabled drivers build config 00:02:08.843 crypto/openssl: not in enabled drivers build config 00:02:08.843 crypto/scheduler: not in enabled drivers build config 00:02:08.843 crypto/uadk: not in enabled drivers build config 00:02:08.843 crypto/virtio: not in enabled drivers build config 00:02:08.843 compress/isal: not in enabled drivers build config 00:02:08.843 compress/mlx5: not in enabled drivers build config 00:02:08.843 compress/nitrox: not in enabled drivers build config 00:02:08.843 compress/octeontx: not in enabled drivers build config 00:02:08.843 compress/zlib: not in enabled drivers build config 00:02:08.843 regex/*: missing internal dependency, "regexdev" 00:02:08.843 ml/*: missing internal dependency, "mldev" 00:02:08.843 vdpa/ifc: not in enabled drivers build config 00:02:08.843 vdpa/mlx5: not in enabled drivers build config 00:02:08.843 vdpa/nfp: not in enabled drivers build config 00:02:08.843 vdpa/sfc: not in enabled drivers build config 00:02:08.843 event/*: missing internal dependency, "eventdev" 00:02:08.843 baseband/*: missing internal dependency, "bbdev" 00:02:08.843 gpu/*: missing internal dependency, "gpudev" 00:02:08.843 00:02:08.843 00:02:08.843 Build targets in project: 85 00:02:08.843 00:02:08.843 DPDK 24.03.0 00:02:08.843 00:02:08.843 User defined options 00:02:08.843 buildtype : debug 00:02:08.843 default_library : shared 00:02:08.843 libdir : lib 00:02:08.843 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:08.843 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:08.843 c_link_args : 00:02:08.843 cpu_instruction_set: native 00:02:08.843 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:08.843 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:08.843 enable_docs : false 00:02:08.843 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:08.843 enable_kmods : false 00:02:08.843 max_lcores : 128 00:02:08.843 tests : false 00:02:08.843 
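The "User defined options" summary above corresponds to -D switches passed to DPDK's meson build. A rough hand-assembled equivalent, with option names being DPDK's standard meson options and values copied from the summary; the long disable_apps/disable_libs lists are elided here, and the exact command SPDK generates for its bundled DPDK may differ:

  # Approximation of the configuration summarized above; not the literal command the build ran.
  cd /home/vagrant/spdk_repo/spdk/dpdk
  meson setup build-tmp \
      -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=lib \
      -Dprefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
      -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
      -Dcpu_instruction_set=native -Dmax_lcores=128 -Dtests=false -Denable_docs=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring -Denable_kmods=false
  ninja -C build-tmp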
00:02:08.843 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:08.843 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:08.843 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:08.843 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:08.843 [3/268] Linking static target lib/librte_kvargs.a 00:02:08.843 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:08.843 [5/268] Linking static target lib/librte_log.a 00:02:08.843 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:09.101 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.101 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:09.360 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:09.360 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:09.360 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:09.360 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:09.360 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:09.360 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:09.360 [15/268] Linking static target lib/librte_telemetry.a 00:02:09.360 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:09.360 [17/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.360 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:09.618 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:09.618 [20/268] Linking target lib/librte_log.so.24.1 00:02:09.877 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:09.877 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:09.877 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:10.136 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:10.137 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:10.137 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:10.137 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:10.395 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:10.395 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.395 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:10.395 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:10.395 [32/268] Linking target lib/librte_telemetry.so.24.1 00:02:10.395 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:10.684 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:10.684 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:10.684 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:10.684 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:10.949 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 
00:02:10.949 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:10.949 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:11.208 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:11.208 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:11.208 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:11.208 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:11.208 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:11.467 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:11.467 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:11.467 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:11.726 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:11.726 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:11.726 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:11.726 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:11.984 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:11.984 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:12.242 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:12.242 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:12.242 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:12.242 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:12.499 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:12.499 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:12.763 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:12.763 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:12.763 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:13.028 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:13.028 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:13.028 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:13.287 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:13.287 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:13.287 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:13.546 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:13.546 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:13.546 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:13.546 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:13.805 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:13.805 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:13.805 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:13.805 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:13.805 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:14.063 [79/268] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:14.063 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:14.063 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:14.063 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:14.322 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:14.322 [84/268] Linking static target lib/librte_ring.a 00:02:14.322 [85/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:14.322 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:14.322 [87/268] Linking static target lib/librte_rcu.a 00:02:14.580 [88/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:14.580 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:14.580 [90/268] Linking static target lib/librte_eal.a 00:02:14.580 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:14.839 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:14.839 [93/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.839 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:14.839 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:14.839 [96/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.839 [97/268] Linking static target lib/librte_mempool.a 00:02:15.098 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:15.098 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:15.098 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:15.357 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:15.357 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:15.357 [103/268] Linking static target lib/librte_mbuf.a 00:02:15.357 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:15.615 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:15.615 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:15.615 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:15.615 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:15.874 [109/268] Linking static target lib/librte_net.a 00:02:15.874 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:15.874 [111/268] Linking static target lib/librte_meter.a 00:02:16.133 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:16.133 [113/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.133 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:16.133 [115/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.133 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:16.133 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.392 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:16.392 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.959 [120/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:16.959 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:16.959 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:17.282 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:17.282 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:17.540 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:17.540 [126/268] Linking static target lib/librte_pci.a 00:02:17.540 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:17.540 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:17.540 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:17.540 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:17.540 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:17.540 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:17.540 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:17.799 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:17.799 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:17.799 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:17.799 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:17.799 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:17.799 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:17.799 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:17.799 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:17.799 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:17.799 [143/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.799 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:17.799 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:18.058 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:18.317 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:18.317 [148/268] Linking static target lib/librte_cmdline.a 00:02:18.317 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:18.576 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:18.576 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:18.576 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:18.576 [153/268] Linking static target lib/librte_ethdev.a 00:02:18.576 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:18.576 [155/268] Linking static target lib/librte_timer.a 00:02:18.835 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:18.835 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:19.093 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:19.093 [159/268] Linking static target lib/librte_hash.a 00:02:19.093 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:19.352 
[161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:19.352 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.352 [163/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:19.352 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:19.352 [165/268] Linking static target lib/librte_compressdev.a 00:02:19.610 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:19.610 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:19.869 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:19.869 [169/268] Linking static target lib/librte_dmadev.a 00:02:19.869 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:19.869 [171/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:19.869 [172/268] Linking static target lib/librte_cryptodev.a 00:02:19.869 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:20.127 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:20.127 [175/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.127 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:20.385 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.385 [178/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.643 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:20.643 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:20.643 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:20.643 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.643 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:20.901 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:21.160 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:21.160 [186/268] Linking static target lib/librte_power.a 00:02:21.160 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:21.418 [188/268] Linking static target lib/librte_reorder.a 00:02:21.418 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:21.418 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:21.418 [191/268] Linking static target lib/librte_security.a 00:02:21.418 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:21.676 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:21.676 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:21.935 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.502 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.502 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.502 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:22.502 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:22.502 [200/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:22.502 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:22.502 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.069 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:23.069 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:23.069 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:23.327 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:23.327 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:23.327 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:23.327 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:23.327 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:23.585 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:23.585 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:23.585 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:23.585 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:23.585 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:23.585 [216/268] Linking static target drivers/librte_bus_vdev.a 00:02:23.585 [217/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:23.844 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:23.844 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:23.844 [220/268] Linking static target drivers/librte_bus_pci.a 00:02:23.844 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:23.844 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:23.844 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.844 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:24.103 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:24.103 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:24.103 [227/268] Linking static target drivers/librte_mempool_ring.a 00:02:24.363 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.930 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:24.930 [230/268] Linking static target lib/librte_vhost.a 00:02:25.866 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.866 [232/268] Linking target lib/librte_eal.so.24.1 00:02:25.866 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:26.125 [234/268] Linking target lib/librte_ring.so.24.1 00:02:26.125 [235/268] Linking target lib/librte_pci.so.24.1 00:02:26.125 [236/268] Linking target lib/librte_meter.so.24.1 00:02:26.125 [237/268] Linking target lib/librte_timer.so.24.1 00:02:26.125 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:26.125 [239/268] Linking target lib/librte_dmadev.so.24.1 
00:02:26.125 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:26.125 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:26.125 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:26.125 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:26.125 [244/268] Linking target lib/librte_rcu.so.24.1 00:02:26.125 [245/268] Linking target lib/librte_mempool.so.24.1 00:02:26.125 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:26.125 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:26.125 [248/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.384 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:26.384 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:26.384 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:26.384 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:26.384 [253/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.643 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:26.643 [255/268] Linking target lib/librte_net.so.24.1 00:02:26.643 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:26.643 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:02:26.643 [258/268] Linking target lib/librte_reorder.so.24.1 00:02:26.643 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:26.643 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:26.643 [261/268] Linking target lib/librte_hash.so.24.1 00:02:26.643 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:26.643 [263/268] Linking target lib/librte_security.so.24.1 00:02:26.643 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:26.903 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:26.904 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:26.904 [267/268] Linking target lib/librte_power.so.24.1 00:02:26.904 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:26.904 INFO: autodetecting backend as ninja 00:02:26.904 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:53.537 CC lib/ut/ut.o 00:02:53.537 CC lib/log/log_flags.o 00:02:53.537 CC lib/ut_mock/mock.o 00:02:53.537 CC lib/log/log_deprecated.o 00:02:53.537 CC lib/log/log.o 00:02:53.537 LIB libspdk_ut.a 00:02:53.537 LIB libspdk_log.a 00:02:53.537 SO libspdk_ut.so.2.0 00:02:53.537 LIB libspdk_ut_mock.a 00:02:53.537 SO libspdk_log.so.7.0 00:02:53.537 SO libspdk_ut_mock.so.6.0 00:02:53.537 SYMLINK libspdk_ut.so 00:02:53.537 SYMLINK libspdk_log.so 00:02:53.537 SYMLINK libspdk_ut_mock.so 00:02:53.537 CC lib/dma/dma.o 00:02:53.537 CC lib/ioat/ioat.o 00:02:53.537 CXX lib/trace_parser/trace.o 00:02:53.537 CC lib/util/bit_array.o 00:02:53.537 CC lib/util/base64.o 00:02:53.537 CC lib/util/crc16.o 00:02:53.537 CC lib/util/cpuset.o 00:02:53.537 CC lib/util/crc32.o 00:02:53.537 CC lib/util/crc32c.o 00:02:53.537 CC lib/vfio_user/host/vfio_user_pci.o 00:02:53.537 CC lib/util/crc32_ieee.o 00:02:53.537 CC lib/util/crc64.o 00:02:53.537 CC lib/util/dif.o 
00:02:53.537 CC lib/util/fd.o 00:02:53.537 CC lib/vfio_user/host/vfio_user.o 00:02:53.537 LIB libspdk_dma.a 00:02:53.537 SO libspdk_dma.so.5.0 00:02:53.537 CC lib/util/fd_group.o 00:02:53.795 CC lib/util/file.o 00:02:53.795 SYMLINK libspdk_dma.so 00:02:53.795 CC lib/util/hexlify.o 00:02:53.795 CC lib/util/iov.o 00:02:53.795 LIB libspdk_ioat.a 00:02:53.795 SO libspdk_ioat.so.7.0 00:02:53.795 CC lib/util/math.o 00:02:53.795 CC lib/util/net.o 00:02:53.795 LIB libspdk_vfio_user.a 00:02:53.795 SYMLINK libspdk_ioat.so 00:02:53.795 CC lib/util/pipe.o 00:02:53.795 SO libspdk_vfio_user.so.5.0 00:02:53.795 CC lib/util/strerror_tls.o 00:02:53.795 CC lib/util/string.o 00:02:54.054 SYMLINK libspdk_vfio_user.so 00:02:54.054 CC lib/util/uuid.o 00:02:54.054 CC lib/util/xor.o 00:02:54.054 CC lib/util/zipf.o 00:02:54.054 CC lib/util/md5.o 00:02:54.313 LIB libspdk_util.a 00:02:54.313 SO libspdk_util.so.10.0 00:02:54.573 SYMLINK libspdk_util.so 00:02:54.573 LIB libspdk_trace_parser.a 00:02:54.573 SO libspdk_trace_parser.so.6.0 00:02:54.573 CC lib/idxd/idxd_user.o 00:02:54.573 CC lib/rdma_utils/rdma_utils.o 00:02:54.573 CC lib/idxd/idxd.o 00:02:54.573 CC lib/idxd/idxd_kernel.o 00:02:54.573 CC lib/json/json_parse.o 00:02:54.573 SYMLINK libspdk_trace_parser.so 00:02:54.573 CC lib/vmd/vmd.o 00:02:54.573 CC lib/vmd/led.o 00:02:54.573 CC lib/conf/conf.o 00:02:54.573 CC lib/env_dpdk/env.o 00:02:54.573 CC lib/rdma_provider/common.o 00:02:54.832 CC lib/json/json_util.o 00:02:54.832 CC lib/json/json_write.o 00:02:54.832 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:54.832 CC lib/env_dpdk/memory.o 00:02:54.832 CC lib/env_dpdk/pci.o 00:02:55.091 LIB libspdk_conf.a 00:02:55.091 LIB libspdk_rdma_utils.a 00:02:55.091 SO libspdk_conf.so.6.0 00:02:55.091 SO libspdk_rdma_utils.so.1.0 00:02:55.091 SYMLINK libspdk_rdma_utils.so 00:02:55.091 SYMLINK libspdk_conf.so 00:02:55.091 CC lib/env_dpdk/init.o 00:02:55.091 CC lib/env_dpdk/threads.o 00:02:55.091 CC lib/env_dpdk/pci_ioat.o 00:02:55.091 LIB libspdk_rdma_provider.a 00:02:55.091 SO libspdk_rdma_provider.so.6.0 00:02:55.091 LIB libspdk_json.a 00:02:55.091 SYMLINK libspdk_rdma_provider.so 00:02:55.091 CC lib/env_dpdk/pci_virtio.o 00:02:55.091 SO libspdk_json.so.6.0 00:02:55.091 CC lib/env_dpdk/pci_vmd.o 00:02:55.091 CC lib/env_dpdk/pci_idxd.o 00:02:55.350 SYMLINK libspdk_json.so 00:02:55.350 LIB libspdk_idxd.a 00:02:55.350 CC lib/env_dpdk/pci_event.o 00:02:55.350 CC lib/env_dpdk/sigbus_handler.o 00:02:55.350 CC lib/env_dpdk/pci_dpdk.o 00:02:55.350 SO libspdk_idxd.so.12.1 00:02:55.350 LIB libspdk_vmd.a 00:02:55.350 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:55.350 SO libspdk_vmd.so.6.0 00:02:55.350 SYMLINK libspdk_idxd.so 00:02:55.350 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:55.350 SYMLINK libspdk_vmd.so 00:02:55.608 CC lib/jsonrpc/jsonrpc_server.o 00:02:55.608 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:55.608 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:55.608 CC lib/jsonrpc/jsonrpc_client.o 00:02:55.867 LIB libspdk_jsonrpc.a 00:02:55.867 SO libspdk_jsonrpc.so.6.0 00:02:55.867 SYMLINK libspdk_jsonrpc.so 00:02:56.126 CC lib/rpc/rpc.o 00:02:56.385 LIB libspdk_env_dpdk.a 00:02:56.385 SO libspdk_env_dpdk.so.15.0 00:02:56.385 LIB libspdk_rpc.a 00:02:56.385 SO libspdk_rpc.so.6.0 00:02:56.644 SYMLINK libspdk_rpc.so 00:02:56.644 SYMLINK libspdk_env_dpdk.so 00:02:56.644 CC lib/trace/trace.o 00:02:56.644 CC lib/notify/notify.o 00:02:56.644 CC lib/notify/notify_rpc.o 00:02:56.644 CC lib/trace/trace_flags.o 00:02:56.644 CC lib/trace/trace_rpc.o 00:02:56.644 CC lib/keyring/keyring.o 00:02:56.644 CC 
lib/keyring/keyring_rpc.o 00:02:56.902 LIB libspdk_notify.a 00:02:56.903 SO libspdk_notify.so.6.0 00:02:57.161 LIB libspdk_trace.a 00:02:57.161 LIB libspdk_keyring.a 00:02:57.161 SYMLINK libspdk_notify.so 00:02:57.161 SO libspdk_trace.so.11.0 00:02:57.161 SO libspdk_keyring.so.2.0 00:02:57.161 SYMLINK libspdk_trace.so 00:02:57.161 SYMLINK libspdk_keyring.so 00:02:57.420 CC lib/sock/sock.o 00:02:57.420 CC lib/thread/thread.o 00:02:57.420 CC lib/sock/sock_rpc.o 00:02:57.420 CC lib/thread/iobuf.o 00:02:57.988 LIB libspdk_sock.a 00:02:57.988 SO libspdk_sock.so.10.0 00:02:57.988 SYMLINK libspdk_sock.so 00:02:58.247 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:58.247 CC lib/nvme/nvme_ns_cmd.o 00:02:58.247 CC lib/nvme/nvme_ctrlr.o 00:02:58.247 CC lib/nvme/nvme_fabric.o 00:02:58.247 CC lib/nvme/nvme_ns.o 00:02:58.247 CC lib/nvme/nvme_pcie_common.o 00:02:58.247 CC lib/nvme/nvme_pcie.o 00:02:58.247 CC lib/nvme/nvme_qpair.o 00:02:58.247 CC lib/nvme/nvme.o 00:02:59.184 LIB libspdk_thread.a 00:02:59.184 SO libspdk_thread.so.10.2 00:02:59.184 SYMLINK libspdk_thread.so 00:02:59.184 CC lib/nvme/nvme_quirks.o 00:02:59.184 CC lib/nvme/nvme_transport.o 00:02:59.184 CC lib/nvme/nvme_discovery.o 00:02:59.184 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:59.184 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:59.184 CC lib/nvme/nvme_tcp.o 00:02:59.184 CC lib/nvme/nvme_opal.o 00:02:59.443 CC lib/nvme/nvme_io_msg.o 00:02:59.443 CC lib/nvme/nvme_poll_group.o 00:02:59.702 CC lib/nvme/nvme_zns.o 00:02:59.702 CC lib/nvme/nvme_stubs.o 00:02:59.961 CC lib/nvme/nvme_auth.o 00:02:59.961 CC lib/nvme/nvme_cuse.o 00:02:59.961 CC lib/nvme/nvme_rdma.o 00:03:00.219 CC lib/accel/accel.o 00:03:00.219 CC lib/accel/accel_rpc.o 00:03:00.219 CC lib/blob/blobstore.o 00:03:00.477 CC lib/blob/request.o 00:03:00.477 CC lib/blob/zeroes.o 00:03:00.736 CC lib/init/json_config.o 00:03:00.736 CC lib/blob/blob_bs_dev.o 00:03:00.736 CC lib/accel/accel_sw.o 00:03:00.736 CC lib/init/subsystem.o 00:03:00.995 CC lib/init/subsystem_rpc.o 00:03:00.995 CC lib/init/rpc.o 00:03:00.995 CC lib/virtio/virtio.o 00:03:00.995 CC lib/virtio/virtio_vhost_user.o 00:03:00.995 CC lib/virtio/virtio_vfio_user.o 00:03:00.995 CC lib/fsdev/fsdev.o 00:03:00.995 CC lib/fsdev/fsdev_io.o 00:03:00.995 CC lib/fsdev/fsdev_rpc.o 00:03:01.288 LIB libspdk_init.a 00:03:01.288 SO libspdk_init.so.6.0 00:03:01.288 CC lib/virtio/virtio_pci.o 00:03:01.288 SYMLINK libspdk_init.so 00:03:01.288 LIB libspdk_nvme.a 00:03:01.288 LIB libspdk_accel.a 00:03:01.547 SO libspdk_accel.so.16.0 00:03:01.547 SYMLINK libspdk_accel.so 00:03:01.547 CC lib/event/app.o 00:03:01.547 CC lib/event/reactor.o 00:03:01.547 CC lib/event/app_rpc.o 00:03:01.547 CC lib/event/scheduler_static.o 00:03:01.547 CC lib/event/log_rpc.o 00:03:01.547 LIB libspdk_virtio.a 00:03:01.547 SO libspdk_nvme.so.14.0 00:03:01.547 SO libspdk_virtio.so.7.0 00:03:01.805 LIB libspdk_fsdev.a 00:03:01.805 SYMLINK libspdk_virtio.so 00:03:01.805 CC lib/bdev/bdev.o 00:03:01.805 CC lib/bdev/bdev_rpc.o 00:03:01.805 CC lib/bdev/bdev_zone.o 00:03:01.805 CC lib/bdev/part.o 00:03:01.805 SO libspdk_fsdev.so.1.0 00:03:01.806 CC lib/bdev/scsi_nvme.o 00:03:01.806 SYMLINK libspdk_fsdev.so 00:03:01.806 SYMLINK libspdk_nvme.so 00:03:02.065 LIB libspdk_event.a 00:03:02.065 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:02.065 SO libspdk_event.so.15.0 00:03:02.065 SYMLINK libspdk_event.so 00:03:02.633 LIB libspdk_fuse_dispatcher.a 00:03:02.633 SO libspdk_fuse_dispatcher.so.1.0 00:03:02.891 SYMLINK libspdk_fuse_dispatcher.so 00:03:03.460 LIB libspdk_blob.a 00:03:03.460 SO 
libspdk_blob.so.11.0 00:03:03.718 SYMLINK libspdk_blob.so 00:03:03.977 CC lib/lvol/lvol.o 00:03:03.977 CC lib/blobfs/blobfs.o 00:03:03.977 CC lib/blobfs/tree.o 00:03:04.547 LIB libspdk_bdev.a 00:03:04.806 SO libspdk_bdev.so.17.0 00:03:04.806 LIB libspdk_blobfs.a 00:03:04.806 SO libspdk_blobfs.so.10.0 00:03:04.806 SYMLINK libspdk_bdev.so 00:03:04.806 SYMLINK libspdk_blobfs.so 00:03:04.806 LIB libspdk_lvol.a 00:03:05.065 SO libspdk_lvol.so.10.0 00:03:05.065 SYMLINK libspdk_lvol.so 00:03:05.065 CC lib/scsi/dev.o 00:03:05.065 CC lib/scsi/lun.o 00:03:05.065 CC lib/scsi/port.o 00:03:05.065 CC lib/scsi/scsi.o 00:03:05.065 CC lib/nbd/nbd.o 00:03:05.065 CC lib/scsi/scsi_bdev.o 00:03:05.065 CC lib/nbd/nbd_rpc.o 00:03:05.065 CC lib/ublk/ublk.o 00:03:05.065 CC lib/ftl/ftl_core.o 00:03:05.065 CC lib/nvmf/ctrlr.o 00:03:05.324 CC lib/scsi/scsi_pr.o 00:03:05.324 CC lib/scsi/scsi_rpc.o 00:03:05.324 CC lib/ublk/ublk_rpc.o 00:03:05.324 CC lib/scsi/task.o 00:03:05.324 CC lib/nvmf/ctrlr_discovery.o 00:03:05.324 CC lib/nvmf/ctrlr_bdev.o 00:03:05.583 LIB libspdk_nbd.a 00:03:05.583 SO libspdk_nbd.so.7.0 00:03:05.583 CC lib/nvmf/subsystem.o 00:03:05.583 CC lib/nvmf/nvmf.o 00:03:05.583 CC lib/ftl/ftl_init.o 00:03:05.583 SYMLINK libspdk_nbd.so 00:03:05.583 CC lib/ftl/ftl_layout.o 00:03:05.583 CC lib/nvmf/nvmf_rpc.o 00:03:05.583 LIB libspdk_scsi.a 00:03:05.583 SO libspdk_scsi.so.9.0 00:03:05.842 LIB libspdk_ublk.a 00:03:05.842 SO libspdk_ublk.so.3.0 00:03:05.842 SYMLINK libspdk_scsi.so 00:03:05.842 CC lib/nvmf/transport.o 00:03:05.842 CC lib/nvmf/tcp.o 00:03:05.842 SYMLINK libspdk_ublk.so 00:03:05.842 CC lib/nvmf/stubs.o 00:03:05.842 CC lib/nvmf/mdns_server.o 00:03:05.842 CC lib/ftl/ftl_debug.o 00:03:06.102 CC lib/ftl/ftl_io.o 00:03:06.360 CC lib/nvmf/rdma.o 00:03:06.360 CC lib/iscsi/conn.o 00:03:06.360 CC lib/ftl/ftl_sb.o 00:03:06.360 CC lib/iscsi/init_grp.o 00:03:06.360 CC lib/nvmf/auth.o 00:03:06.619 CC lib/iscsi/iscsi.o 00:03:06.619 CC lib/vhost/vhost.o 00:03:06.619 CC lib/ftl/ftl_l2p.o 00:03:06.878 CC lib/iscsi/param.o 00:03:06.878 CC lib/vhost/vhost_rpc.o 00:03:06.878 CC lib/ftl/ftl_l2p_flat.o 00:03:06.878 CC lib/ftl/ftl_nv_cache.o 00:03:06.878 CC lib/iscsi/portal_grp.o 00:03:07.137 CC lib/vhost/vhost_scsi.o 00:03:07.137 CC lib/iscsi/tgt_node.o 00:03:07.137 CC lib/vhost/vhost_blk.o 00:03:07.396 CC lib/vhost/rte_vhost_user.o 00:03:07.396 CC lib/ftl/ftl_band.o 00:03:07.396 CC lib/iscsi/iscsi_subsystem.o 00:03:07.396 CC lib/iscsi/iscsi_rpc.o 00:03:07.655 CC lib/iscsi/task.o 00:03:07.655 CC lib/ftl/ftl_band_ops.o 00:03:07.655 CC lib/ftl/ftl_writer.o 00:03:07.914 CC lib/ftl/ftl_rq.o 00:03:07.914 CC lib/ftl/ftl_reloc.o 00:03:07.914 LIB libspdk_iscsi.a 00:03:07.914 CC lib/ftl/ftl_l2p_cache.o 00:03:07.914 SO libspdk_iscsi.so.8.0 00:03:07.914 CC lib/ftl/ftl_p2l.o 00:03:07.914 CC lib/ftl/ftl_p2l_log.o 00:03:07.914 CC lib/ftl/mngt/ftl_mngt.o 00:03:08.174 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:08.174 SYMLINK libspdk_iscsi.so 00:03:08.174 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:08.174 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:08.174 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:08.174 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:08.433 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:08.433 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:08.433 LIB libspdk_vhost.a 00:03:08.433 LIB libspdk_nvmf.a 00:03:08.433 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:08.433 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:08.433 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:08.433 SO libspdk_vhost.so.8.0 00:03:08.433 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:08.433 SO libspdk_nvmf.so.19.0 
00:03:08.693 SYMLINK libspdk_vhost.so 00:03:08.693 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:08.693 CC lib/ftl/utils/ftl_conf.o 00:03:08.693 CC lib/ftl/utils/ftl_md.o 00:03:08.693 CC lib/ftl/utils/ftl_mempool.o 00:03:08.693 CC lib/ftl/utils/ftl_bitmap.o 00:03:08.693 CC lib/ftl/utils/ftl_property.o 00:03:08.693 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:08.693 SYMLINK libspdk_nvmf.so 00:03:08.693 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:08.693 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:08.693 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:08.693 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:08.693 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:08.952 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:08.952 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:08.952 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:08.952 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:08.952 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:08.952 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:08.952 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:08.952 CC lib/ftl/base/ftl_base_dev.o 00:03:08.952 CC lib/ftl/base/ftl_base_bdev.o 00:03:08.952 CC lib/ftl/ftl_trace.o 00:03:09.212 LIB libspdk_ftl.a 00:03:09.471 SO libspdk_ftl.so.9.0 00:03:09.730 SYMLINK libspdk_ftl.so 00:03:10.300 CC module/env_dpdk/env_dpdk_rpc.o 00:03:10.300 CC module/accel/error/accel_error.o 00:03:10.300 CC module/sock/uring/uring.o 00:03:10.300 CC module/keyring/linux/keyring.o 00:03:10.300 CC module/keyring/file/keyring.o 00:03:10.300 CC module/blob/bdev/blob_bdev.o 00:03:10.300 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:10.300 CC module/sock/posix/posix.o 00:03:10.300 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:10.300 CC module/fsdev/aio/fsdev_aio.o 00:03:10.300 LIB libspdk_env_dpdk_rpc.a 00:03:10.300 SO libspdk_env_dpdk_rpc.so.6.0 00:03:10.301 CC module/keyring/file/keyring_rpc.o 00:03:10.560 CC module/keyring/linux/keyring_rpc.o 00:03:10.560 CC module/accel/error/accel_error_rpc.o 00:03:10.560 LIB libspdk_scheduler_dpdk_governor.a 00:03:10.560 SYMLINK libspdk_env_dpdk_rpc.so 00:03:10.560 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:10.560 LIB libspdk_scheduler_dynamic.a 00:03:10.560 SO libspdk_scheduler_dynamic.so.4.0 00:03:10.560 LIB libspdk_blob_bdev.a 00:03:10.560 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:10.560 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:10.560 SO libspdk_blob_bdev.so.11.0 00:03:10.560 SYMLINK libspdk_scheduler_dynamic.so 00:03:10.560 LIB libspdk_keyring_file.a 00:03:10.560 LIB libspdk_keyring_linux.a 00:03:10.560 LIB libspdk_accel_error.a 00:03:10.560 SO libspdk_keyring_file.so.2.0 00:03:10.560 SO libspdk_keyring_linux.so.1.0 00:03:10.560 SO libspdk_accel_error.so.2.0 00:03:10.560 SYMLINK libspdk_blob_bdev.so 00:03:10.560 CC module/accel/ioat/accel_ioat.o 00:03:10.560 CC module/fsdev/aio/linux_aio_mgr.o 00:03:10.819 SYMLINK libspdk_keyring_linux.so 00:03:10.819 SYMLINK libspdk_accel_error.so 00:03:10.819 SYMLINK libspdk_keyring_file.so 00:03:10.819 CC module/scheduler/gscheduler/gscheduler.o 00:03:10.819 CC module/accel/ioat/accel_ioat_rpc.o 00:03:10.819 CC module/accel/dsa/accel_dsa.o 00:03:10.819 CC module/accel/dsa/accel_dsa_rpc.o 00:03:10.819 LIB libspdk_fsdev_aio.a 00:03:10.819 CC module/accel/iaa/accel_iaa.o 00:03:10.819 SO libspdk_fsdev_aio.so.1.0 00:03:11.078 LIB libspdk_sock_uring.a 00:03:11.078 LIB libspdk_scheduler_gscheduler.a 00:03:11.078 CC module/bdev/delay/vbdev_delay.o 00:03:11.078 SO libspdk_scheduler_gscheduler.so.4.0 00:03:11.078 SO libspdk_sock_uring.so.5.0 00:03:11.078 SYMLINK libspdk_fsdev_aio.so 00:03:11.078 CC 
module/accel/iaa/accel_iaa_rpc.o 00:03:11.078 CC module/blobfs/bdev/blobfs_bdev.o 00:03:11.078 LIB libspdk_accel_ioat.a 00:03:11.078 SYMLINK libspdk_sock_uring.so 00:03:11.078 SYMLINK libspdk_scheduler_gscheduler.so 00:03:11.078 LIB libspdk_sock_posix.a 00:03:11.078 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:11.078 SO libspdk_accel_ioat.so.6.0 00:03:11.078 SO libspdk_sock_posix.so.6.0 00:03:11.078 SYMLINK libspdk_accel_ioat.so 00:03:11.078 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:11.078 SYMLINK libspdk_sock_posix.so 00:03:11.078 LIB libspdk_accel_iaa.a 00:03:11.078 LIB libspdk_accel_dsa.a 00:03:11.078 SO libspdk_accel_iaa.so.3.0 00:03:11.337 CC module/bdev/error/vbdev_error.o 00:03:11.337 SO libspdk_accel_dsa.so.5.0 00:03:11.337 CC module/bdev/gpt/gpt.o 00:03:11.337 LIB libspdk_blobfs_bdev.a 00:03:11.337 SYMLINK libspdk_accel_iaa.so 00:03:11.337 SYMLINK libspdk_accel_dsa.so 00:03:11.337 SO libspdk_blobfs_bdev.so.6.0 00:03:11.337 CC module/bdev/lvol/vbdev_lvol.o 00:03:11.337 CC module/bdev/gpt/vbdev_gpt.o 00:03:11.337 LIB libspdk_bdev_delay.a 00:03:11.337 CC module/bdev/malloc/bdev_malloc.o 00:03:11.337 SYMLINK libspdk_blobfs_bdev.so 00:03:11.337 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:11.337 SO libspdk_bdev_delay.so.6.0 00:03:11.337 CC module/bdev/null/bdev_null.o 00:03:11.337 CC module/bdev/passthru/vbdev_passthru.o 00:03:11.337 CC module/bdev/nvme/bdev_nvme.o 00:03:11.337 SYMLINK libspdk_bdev_delay.so 00:03:11.337 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:11.596 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:11.596 CC module/bdev/error/vbdev_error_rpc.o 00:03:11.596 LIB libspdk_bdev_gpt.a 00:03:11.596 SO libspdk_bdev_gpt.so.6.0 00:03:11.596 LIB libspdk_bdev_error.a 00:03:11.596 SYMLINK libspdk_bdev_gpt.so 00:03:11.596 SO libspdk_bdev_error.so.6.0 00:03:11.856 LIB libspdk_bdev_malloc.a 00:03:11.856 LIB libspdk_bdev_passthru.a 00:03:11.856 SO libspdk_bdev_malloc.so.6.0 00:03:11.856 CC module/bdev/null/bdev_null_rpc.o 00:03:11.856 SYMLINK libspdk_bdev_error.so 00:03:11.856 SO libspdk_bdev_passthru.so.6.0 00:03:11.856 CC module/bdev/raid/bdev_raid.o 00:03:11.856 LIB libspdk_bdev_lvol.a 00:03:11.856 SYMLINK libspdk_bdev_malloc.so 00:03:11.856 CC module/bdev/split/vbdev_split.o 00:03:11.856 CC module/bdev/raid/bdev_raid_rpc.o 00:03:11.856 SO libspdk_bdev_lvol.so.6.0 00:03:11.856 SYMLINK libspdk_bdev_passthru.so 00:03:11.856 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:11.856 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:11.856 CC module/bdev/uring/bdev_uring.o 00:03:11.856 SYMLINK libspdk_bdev_lvol.so 00:03:11.856 CC module/bdev/aio/bdev_aio.o 00:03:11.856 CC module/bdev/uring/bdev_uring_rpc.o 00:03:11.856 LIB libspdk_bdev_null.a 00:03:12.116 SO libspdk_bdev_null.so.6.0 00:03:12.116 SYMLINK libspdk_bdev_null.so 00:03:12.116 CC module/bdev/nvme/nvme_rpc.o 00:03:12.116 CC module/bdev/nvme/bdev_mdns_client.o 00:03:12.116 CC module/bdev/split/vbdev_split_rpc.o 00:03:12.116 CC module/bdev/raid/bdev_raid_sb.o 00:03:12.374 LIB libspdk_bdev_uring.a 00:03:12.374 CC module/bdev/nvme/vbdev_opal.o 00:03:12.374 LIB libspdk_bdev_split.a 00:03:12.375 CC module/bdev/aio/bdev_aio_rpc.o 00:03:12.375 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:12.375 SO libspdk_bdev_uring.so.6.0 00:03:12.375 SO libspdk_bdev_split.so.6.0 00:03:12.375 SYMLINK libspdk_bdev_uring.so 00:03:12.375 SYMLINK libspdk_bdev_split.so 00:03:12.375 CC module/bdev/raid/raid0.o 00:03:12.634 LIB libspdk_bdev_zone_block.a 00:03:12.634 LIB libspdk_bdev_aio.a 00:03:12.634 CC module/bdev/raid/raid1.o 00:03:12.634 SO 
libspdk_bdev_aio.so.6.0 00:03:12.634 SO libspdk_bdev_zone_block.so.6.0 00:03:12.634 CC module/bdev/ftl/bdev_ftl.o 00:03:12.634 CC module/bdev/iscsi/bdev_iscsi.o 00:03:12.634 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:12.634 SYMLINK libspdk_bdev_zone_block.so 00:03:12.634 SYMLINK libspdk_bdev_aio.so 00:03:12.634 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:12.634 CC module/bdev/raid/concat.o 00:03:12.634 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:12.892 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:12.892 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:12.892 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:12.892 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:12.892 LIB libspdk_bdev_raid.a 00:03:12.892 LIB libspdk_bdev_ftl.a 00:03:12.892 SO libspdk_bdev_ftl.so.6.0 00:03:12.892 SO libspdk_bdev_raid.so.6.0 00:03:12.892 LIB libspdk_bdev_iscsi.a 00:03:12.892 SYMLINK libspdk_bdev_ftl.so 00:03:12.892 SO libspdk_bdev_iscsi.so.6.0 00:03:12.892 SYMLINK libspdk_bdev_raid.so 00:03:13.150 SYMLINK libspdk_bdev_iscsi.so 00:03:13.150 LIB libspdk_bdev_virtio.a 00:03:13.150 SO libspdk_bdev_virtio.so.6.0 00:03:13.150 SYMLINK libspdk_bdev_virtio.so 00:03:13.718 LIB libspdk_bdev_nvme.a 00:03:13.718 SO libspdk_bdev_nvme.so.7.0 00:03:13.977 SYMLINK libspdk_bdev_nvme.so 00:03:14.545 CC module/event/subsystems/scheduler/scheduler.o 00:03:14.545 CC module/event/subsystems/fsdev/fsdev.o 00:03:14.545 CC module/event/subsystems/sock/sock.o 00:03:14.545 CC module/event/subsystems/keyring/keyring.o 00:03:14.545 CC module/event/subsystems/iobuf/iobuf.o 00:03:14.545 CC module/event/subsystems/vmd/vmd.o 00:03:14.545 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:14.545 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:14.545 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:14.545 LIB libspdk_event_scheduler.a 00:03:14.545 LIB libspdk_event_keyring.a 00:03:14.545 LIB libspdk_event_sock.a 00:03:14.545 LIB libspdk_event_iobuf.a 00:03:14.545 LIB libspdk_event_fsdev.a 00:03:14.545 LIB libspdk_event_vhost_blk.a 00:03:14.545 LIB libspdk_event_vmd.a 00:03:14.545 SO libspdk_event_scheduler.so.4.0 00:03:14.545 SO libspdk_event_sock.so.5.0 00:03:14.545 SO libspdk_event_keyring.so.1.0 00:03:14.545 SO libspdk_event_fsdev.so.1.0 00:03:14.545 SO libspdk_event_iobuf.so.3.0 00:03:14.545 SO libspdk_event_vhost_blk.so.3.0 00:03:14.546 SO libspdk_event_vmd.so.6.0 00:03:14.804 SYMLINK libspdk_event_sock.so 00:03:14.804 SYMLINK libspdk_event_scheduler.so 00:03:14.804 SYMLINK libspdk_event_keyring.so 00:03:14.804 SYMLINK libspdk_event_fsdev.so 00:03:14.804 SYMLINK libspdk_event_iobuf.so 00:03:14.804 SYMLINK libspdk_event_vhost_blk.so 00:03:14.804 SYMLINK libspdk_event_vmd.so 00:03:15.063 CC module/event/subsystems/accel/accel.o 00:03:15.063 LIB libspdk_event_accel.a 00:03:15.063 SO libspdk_event_accel.so.6.0 00:03:15.322 SYMLINK libspdk_event_accel.so 00:03:15.581 CC module/event/subsystems/bdev/bdev.o 00:03:15.581 LIB libspdk_event_bdev.a 00:03:15.840 SO libspdk_event_bdev.so.6.0 00:03:15.840 SYMLINK libspdk_event_bdev.so 00:03:16.098 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:16.098 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:16.098 CC module/event/subsystems/scsi/scsi.o 00:03:16.098 CC module/event/subsystems/nbd/nbd.o 00:03:16.099 CC module/event/subsystems/ublk/ublk.o 00:03:16.099 LIB libspdk_event_nbd.a 00:03:16.099 LIB libspdk_event_ublk.a 00:03:16.382 SO libspdk_event_nbd.so.6.0 00:03:16.382 LIB libspdk_event_scsi.a 00:03:16.382 SO libspdk_event_ublk.so.3.0 00:03:16.382 SO libspdk_event_scsi.so.6.0 00:03:16.382 SYMLINK 
libspdk_event_nbd.so 00:03:16.382 LIB libspdk_event_nvmf.a 00:03:16.382 SYMLINK libspdk_event_ublk.so 00:03:16.382 SYMLINK libspdk_event_scsi.so 00:03:16.382 SO libspdk_event_nvmf.so.6.0 00:03:16.382 SYMLINK libspdk_event_nvmf.so 00:03:16.650 CC module/event/subsystems/iscsi/iscsi.o 00:03:16.650 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:16.908 LIB libspdk_event_vhost_scsi.a 00:03:16.908 LIB libspdk_event_iscsi.a 00:03:16.908 SO libspdk_event_vhost_scsi.so.3.0 00:03:16.908 SO libspdk_event_iscsi.so.6.0 00:03:16.908 SYMLINK libspdk_event_vhost_scsi.so 00:03:16.908 SYMLINK libspdk_event_iscsi.so 00:03:17.167 SO libspdk.so.6.0 00:03:17.167 SYMLINK libspdk.so 00:03:17.426 CC app/trace_record/trace_record.o 00:03:17.426 TEST_HEADER include/spdk/accel.h 00:03:17.426 TEST_HEADER include/spdk/accel_module.h 00:03:17.426 CC test/rpc_client/rpc_client_test.o 00:03:17.426 TEST_HEADER include/spdk/assert.h 00:03:17.426 CXX app/trace/trace.o 00:03:17.426 TEST_HEADER include/spdk/barrier.h 00:03:17.426 TEST_HEADER include/spdk/base64.h 00:03:17.426 TEST_HEADER include/spdk/bdev.h 00:03:17.426 TEST_HEADER include/spdk/bdev_module.h 00:03:17.426 TEST_HEADER include/spdk/bdev_zone.h 00:03:17.426 TEST_HEADER include/spdk/bit_array.h 00:03:17.426 TEST_HEADER include/spdk/bit_pool.h 00:03:17.426 TEST_HEADER include/spdk/blob_bdev.h 00:03:17.426 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:17.426 TEST_HEADER include/spdk/blobfs.h 00:03:17.426 TEST_HEADER include/spdk/blob.h 00:03:17.426 TEST_HEADER include/spdk/conf.h 00:03:17.426 TEST_HEADER include/spdk/config.h 00:03:17.426 CC app/nvmf_tgt/nvmf_main.o 00:03:17.426 TEST_HEADER include/spdk/cpuset.h 00:03:17.426 TEST_HEADER include/spdk/crc16.h 00:03:17.426 TEST_HEADER include/spdk/crc32.h 00:03:17.426 TEST_HEADER include/spdk/crc64.h 00:03:17.426 TEST_HEADER include/spdk/dif.h 00:03:17.426 TEST_HEADER include/spdk/dma.h 00:03:17.426 TEST_HEADER include/spdk/endian.h 00:03:17.426 TEST_HEADER include/spdk/env_dpdk.h 00:03:17.426 TEST_HEADER include/spdk/env.h 00:03:17.426 TEST_HEADER include/spdk/event.h 00:03:17.426 TEST_HEADER include/spdk/fd_group.h 00:03:17.426 TEST_HEADER include/spdk/fd.h 00:03:17.426 TEST_HEADER include/spdk/file.h 00:03:17.426 TEST_HEADER include/spdk/fsdev.h 00:03:17.426 TEST_HEADER include/spdk/fsdev_module.h 00:03:17.426 TEST_HEADER include/spdk/ftl.h 00:03:17.426 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:17.426 TEST_HEADER include/spdk/gpt_spec.h 00:03:17.426 TEST_HEADER include/spdk/hexlify.h 00:03:17.426 CC test/thread/poller_perf/poller_perf.o 00:03:17.426 TEST_HEADER include/spdk/histogram_data.h 00:03:17.426 TEST_HEADER include/spdk/idxd.h 00:03:17.426 TEST_HEADER include/spdk/idxd_spec.h 00:03:17.426 TEST_HEADER include/spdk/init.h 00:03:17.426 TEST_HEADER include/spdk/ioat.h 00:03:17.426 TEST_HEADER include/spdk/ioat_spec.h 00:03:17.426 TEST_HEADER include/spdk/iscsi_spec.h 00:03:17.426 CC examples/util/zipf/zipf.o 00:03:17.426 TEST_HEADER include/spdk/json.h 00:03:17.426 TEST_HEADER include/spdk/jsonrpc.h 00:03:17.426 TEST_HEADER include/spdk/keyring.h 00:03:17.426 TEST_HEADER include/spdk/keyring_module.h 00:03:17.426 TEST_HEADER include/spdk/likely.h 00:03:17.426 TEST_HEADER include/spdk/log.h 00:03:17.426 TEST_HEADER include/spdk/lvol.h 00:03:17.426 TEST_HEADER include/spdk/md5.h 00:03:17.426 TEST_HEADER include/spdk/memory.h 00:03:17.426 TEST_HEADER include/spdk/mmio.h 00:03:17.426 TEST_HEADER include/spdk/nbd.h 00:03:17.426 CC test/app/bdev_svc/bdev_svc.o 00:03:17.426 CC 
test/dma/test_dma/test_dma.o 00:03:17.426 TEST_HEADER include/spdk/net.h 00:03:17.426 TEST_HEADER include/spdk/notify.h 00:03:17.426 TEST_HEADER include/spdk/nvme.h 00:03:17.426 TEST_HEADER include/spdk/nvme_intel.h 00:03:17.426 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:17.426 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:17.426 TEST_HEADER include/spdk/nvme_spec.h 00:03:17.426 TEST_HEADER include/spdk/nvme_zns.h 00:03:17.426 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:17.426 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:17.426 TEST_HEADER include/spdk/nvmf.h 00:03:17.426 TEST_HEADER include/spdk/nvmf_spec.h 00:03:17.426 TEST_HEADER include/spdk/nvmf_transport.h 00:03:17.426 TEST_HEADER include/spdk/opal.h 00:03:17.426 TEST_HEADER include/spdk/opal_spec.h 00:03:17.426 TEST_HEADER include/spdk/pci_ids.h 00:03:17.426 TEST_HEADER include/spdk/pipe.h 00:03:17.426 TEST_HEADER include/spdk/queue.h 00:03:17.426 TEST_HEADER include/spdk/reduce.h 00:03:17.426 TEST_HEADER include/spdk/rpc.h 00:03:17.426 TEST_HEADER include/spdk/scheduler.h 00:03:17.426 CC test/env/mem_callbacks/mem_callbacks.o 00:03:17.426 TEST_HEADER include/spdk/scsi.h 00:03:17.426 TEST_HEADER include/spdk/scsi_spec.h 00:03:17.426 TEST_HEADER include/spdk/sock.h 00:03:17.426 TEST_HEADER include/spdk/stdinc.h 00:03:17.426 TEST_HEADER include/spdk/string.h 00:03:17.426 TEST_HEADER include/spdk/thread.h 00:03:17.426 TEST_HEADER include/spdk/trace.h 00:03:17.685 TEST_HEADER include/spdk/trace_parser.h 00:03:17.685 TEST_HEADER include/spdk/tree.h 00:03:17.685 TEST_HEADER include/spdk/ublk.h 00:03:17.685 TEST_HEADER include/spdk/util.h 00:03:17.685 TEST_HEADER include/spdk/uuid.h 00:03:17.685 TEST_HEADER include/spdk/version.h 00:03:17.685 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:17.685 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:17.685 TEST_HEADER include/spdk/vhost.h 00:03:17.685 TEST_HEADER include/spdk/vmd.h 00:03:17.685 TEST_HEADER include/spdk/xor.h 00:03:17.685 TEST_HEADER include/spdk/zipf.h 00:03:17.685 CXX test/cpp_headers/accel.o 00:03:17.685 LINK rpc_client_test 00:03:17.685 LINK nvmf_tgt 00:03:17.685 LINK poller_perf 00:03:17.685 LINK spdk_trace_record 00:03:17.685 LINK zipf 00:03:17.685 LINK bdev_svc 00:03:17.685 CXX test/cpp_headers/accel_module.o 00:03:17.685 LINK spdk_trace 00:03:17.944 CC test/env/vtophys/vtophys.o 00:03:17.944 CC test/env/memory/memory_ut.o 00:03:17.944 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:17.944 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:17.944 CC examples/ioat/perf/perf.o 00:03:17.944 CXX test/cpp_headers/assert.o 00:03:17.944 LINK test_dma 00:03:18.203 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:18.203 LINK vtophys 00:03:18.203 CC app/iscsi_tgt/iscsi_tgt.o 00:03:18.203 LINK env_dpdk_post_init 00:03:18.203 LINK interrupt_tgt 00:03:18.203 CXX test/cpp_headers/barrier.o 00:03:18.203 LINK ioat_perf 00:03:18.203 LINK mem_callbacks 00:03:18.203 CXX test/cpp_headers/base64.o 00:03:18.203 CXX test/cpp_headers/bdev.o 00:03:18.203 CXX test/cpp_headers/bdev_module.o 00:03:18.203 CXX test/cpp_headers/bdev_zone.o 00:03:18.462 LINK iscsi_tgt 00:03:18.462 CC examples/ioat/verify/verify.o 00:03:18.462 LINK nvme_fuzz 00:03:18.462 CC test/event/event_perf/event_perf.o 00:03:18.462 CXX test/cpp_headers/bit_array.o 00:03:18.462 CC test/nvme/aer/aer.o 00:03:18.721 CC test/env/pci/pci_ut.o 00:03:18.721 CC test/accel/dif/dif.o 00:03:18.721 CC test/blobfs/mkfs/mkfs.o 00:03:18.721 LINK verify 00:03:18.721 LINK event_perf 00:03:18.721 CXX test/cpp_headers/bit_pool.o 00:03:18.721 CC 
app/spdk_tgt/spdk_tgt.o 00:03:18.721 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:18.980 LINK mkfs 00:03:18.980 CXX test/cpp_headers/blob_bdev.o 00:03:18.980 LINK aer 00:03:18.980 CC test/event/reactor/reactor.o 00:03:18.980 LINK pci_ut 00:03:18.980 LINK spdk_tgt 00:03:18.980 CC examples/thread/thread/thread_ex.o 00:03:18.980 LINK reactor 00:03:19.239 CXX test/cpp_headers/blobfs_bdev.o 00:03:19.239 CC test/nvme/reset/reset.o 00:03:19.239 CC test/event/reactor_perf/reactor_perf.o 00:03:19.239 LINK memory_ut 00:03:19.239 CXX test/cpp_headers/blobfs.o 00:03:19.239 CXX test/cpp_headers/blob.o 00:03:19.239 LINK thread 00:03:19.239 CC app/spdk_lspci/spdk_lspci.o 00:03:19.239 LINK reactor_perf 00:03:19.239 CXX test/cpp_headers/conf.o 00:03:19.239 LINK dif 00:03:19.498 LINK reset 00:03:19.498 CXX test/cpp_headers/config.o 00:03:19.498 CC app/spdk_nvme_perf/perf.o 00:03:19.498 LINK spdk_lspci 00:03:19.498 CXX test/cpp_headers/cpuset.o 00:03:19.498 CC examples/sock/hello_world/hello_sock.o 00:03:19.498 CC test/event/app_repeat/app_repeat.o 00:03:19.757 CC test/event/scheduler/scheduler.o 00:03:19.757 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:19.757 CXX test/cpp_headers/crc16.o 00:03:19.757 CC test/nvme/sgl/sgl.o 00:03:19.757 LINK app_repeat 00:03:19.757 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:19.757 LINK hello_sock 00:03:20.017 CC examples/vmd/lsvmd/lsvmd.o 00:03:20.017 CC test/lvol/esnap/esnap.o 00:03:20.017 LINK scheduler 00:03:20.017 CXX test/cpp_headers/crc32.o 00:03:20.017 CXX test/cpp_headers/crc64.o 00:03:20.017 LINK lsvmd 00:03:20.017 LINK sgl 00:03:20.277 CXX test/cpp_headers/dif.o 00:03:20.277 CC examples/idxd/perf/perf.o 00:03:20.277 LINK vhost_fuzz 00:03:20.277 CC examples/vmd/led/led.o 00:03:20.277 CC test/nvme/e2edp/nvme_dp.o 00:03:20.277 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:20.277 CC examples/accel/perf/accel_perf.o 00:03:20.277 CXX test/cpp_headers/dma.o 00:03:20.537 LINK spdk_nvme_perf 00:03:20.537 LINK iscsi_fuzz 00:03:20.537 LINK led 00:03:20.537 CXX test/cpp_headers/endian.o 00:03:20.537 LINK idxd_perf 00:03:20.537 LINK nvme_dp 00:03:20.537 LINK hello_fsdev 00:03:20.796 CC examples/blob/hello_world/hello_blob.o 00:03:20.796 CC app/spdk_nvme_identify/identify.o 00:03:20.796 CXX test/cpp_headers/env_dpdk.o 00:03:20.796 CC test/app/histogram_perf/histogram_perf.o 00:03:20.796 CC test/app/jsoncat/jsoncat.o 00:03:20.796 LINK accel_perf 00:03:20.796 CC examples/nvme/hello_world/hello_world.o 00:03:20.796 CC test/nvme/overhead/overhead.o 00:03:21.056 CXX test/cpp_headers/env.o 00:03:21.056 LINK hello_blob 00:03:21.056 CC examples/nvme/reconnect/reconnect.o 00:03:21.056 LINK jsoncat 00:03:21.056 LINK histogram_perf 00:03:21.056 CC app/spdk_nvme_discover/discovery_aer.o 00:03:21.056 CXX test/cpp_headers/event.o 00:03:21.056 LINK hello_world 00:03:21.056 CXX test/cpp_headers/fd_group.o 00:03:21.315 LINK overhead 00:03:21.315 CC test/app/stub/stub.o 00:03:21.315 CC examples/blob/cli/blobcli.o 00:03:21.316 LINK spdk_nvme_discover 00:03:21.316 CXX test/cpp_headers/fd.o 00:03:21.316 LINK reconnect 00:03:21.316 LINK stub 00:03:21.575 CC test/nvme/err_injection/err_injection.o 00:03:21.575 CXX test/cpp_headers/file.o 00:03:21.575 CC test/bdev/bdevio/bdevio.o 00:03:21.575 LINK spdk_nvme_identify 00:03:21.575 CC examples/bdev/hello_world/hello_bdev.o 00:03:21.575 CC app/spdk_top/spdk_top.o 00:03:21.575 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:21.575 LINK err_injection 00:03:21.575 CC examples/nvme/arbitration/arbitration.o 00:03:21.575 CXX 
test/cpp_headers/fsdev.o 00:03:21.834 CC examples/nvme/hotplug/hotplug.o 00:03:21.834 LINK blobcli 00:03:21.834 LINK hello_bdev 00:03:21.834 CXX test/cpp_headers/fsdev_module.o 00:03:21.834 CC test/nvme/startup/startup.o 00:03:21.834 LINK bdevio 00:03:22.093 LINK hotplug 00:03:22.093 CC test/nvme/reserve/reserve.o 00:03:22.093 CXX test/cpp_headers/ftl.o 00:03:22.093 LINK arbitration 00:03:22.093 LINK nvme_manage 00:03:22.093 CXX test/cpp_headers/fuse_dispatcher.o 00:03:22.093 LINK startup 00:03:22.093 CC examples/bdev/bdevperf/bdevperf.o 00:03:22.093 CXX test/cpp_headers/gpt_spec.o 00:03:22.352 LINK reserve 00:03:22.352 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:22.352 CXX test/cpp_headers/hexlify.o 00:03:22.352 CC examples/nvme/abort/abort.o 00:03:22.352 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:22.352 CC test/nvme/simple_copy/simple_copy.o 00:03:22.352 CC test/nvme/connect_stress/connect_stress.o 00:03:22.611 LINK spdk_top 00:03:22.611 LINK cmb_copy 00:03:22.611 CXX test/cpp_headers/histogram_data.o 00:03:22.611 CC test/nvme/boot_partition/boot_partition.o 00:03:22.611 LINK pmr_persistence 00:03:22.611 LINK connect_stress 00:03:22.611 LINK simple_copy 00:03:22.611 CXX test/cpp_headers/idxd.o 00:03:22.870 LINK boot_partition 00:03:22.870 LINK abort 00:03:22.870 CXX test/cpp_headers/idxd_spec.o 00:03:22.870 CC test/nvme/compliance/nvme_compliance.o 00:03:22.870 CC app/vhost/vhost.o 00:03:22.870 CC test/nvme/fused_ordering/fused_ordering.o 00:03:22.870 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:22.870 CXX test/cpp_headers/init.o 00:03:22.870 CXX test/cpp_headers/ioat.o 00:03:22.870 LINK bdevperf 00:03:22.870 CC test/nvme/fdp/fdp.o 00:03:23.208 CC test/nvme/cuse/cuse.o 00:03:23.208 LINK vhost 00:03:23.208 LINK fused_ordering 00:03:23.208 CXX test/cpp_headers/ioat_spec.o 00:03:23.208 CXX test/cpp_headers/iscsi_spec.o 00:03:23.208 LINK doorbell_aers 00:03:23.208 LINK nvme_compliance 00:03:23.208 CXX test/cpp_headers/json.o 00:03:23.208 CXX test/cpp_headers/jsonrpc.o 00:03:23.208 CXX test/cpp_headers/keyring.o 00:03:23.467 CXX test/cpp_headers/keyring_module.o 00:03:23.467 LINK fdp 00:03:23.467 CC examples/nvmf/nvmf/nvmf.o 00:03:23.467 CC app/spdk_dd/spdk_dd.o 00:03:23.467 CXX test/cpp_headers/likely.o 00:03:23.467 CXX test/cpp_headers/log.o 00:03:23.467 CC app/fio/nvme/fio_plugin.o 00:03:23.467 CXX test/cpp_headers/lvol.o 00:03:23.467 CXX test/cpp_headers/md5.o 00:03:23.467 CXX test/cpp_headers/memory.o 00:03:23.467 CC app/fio/bdev/fio_plugin.o 00:03:23.726 CXX test/cpp_headers/mmio.o 00:03:23.726 CXX test/cpp_headers/nbd.o 00:03:23.726 LINK nvmf 00:03:23.726 CXX test/cpp_headers/net.o 00:03:23.726 CXX test/cpp_headers/notify.o 00:03:23.726 CXX test/cpp_headers/nvme.o 00:03:23.726 CXX test/cpp_headers/nvme_intel.o 00:03:23.985 LINK spdk_dd 00:03:23.985 CXX test/cpp_headers/nvme_ocssd.o 00:03:23.985 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:23.985 CXX test/cpp_headers/nvme_spec.o 00:03:23.985 CXX test/cpp_headers/nvme_zns.o 00:03:23.985 CXX test/cpp_headers/nvmf_cmd.o 00:03:23.985 LINK spdk_nvme 00:03:23.985 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:23.985 CXX test/cpp_headers/nvmf.o 00:03:23.985 LINK spdk_bdev 00:03:23.985 CXX test/cpp_headers/nvmf_spec.o 00:03:24.244 CXX test/cpp_headers/nvmf_transport.o 00:03:24.244 CXX test/cpp_headers/opal.o 00:03:24.244 CXX test/cpp_headers/opal_spec.o 00:03:24.244 CXX test/cpp_headers/pci_ids.o 00:03:24.244 CXX test/cpp_headers/pipe.o 00:03:24.244 CXX test/cpp_headers/queue.o 00:03:24.244 CXX test/cpp_headers/reduce.o 00:03:24.244 
CXX test/cpp_headers/rpc.o 00:03:24.244 CXX test/cpp_headers/scheduler.o 00:03:24.244 CXX test/cpp_headers/scsi.o 00:03:24.244 CXX test/cpp_headers/scsi_spec.o 00:03:24.244 CXX test/cpp_headers/sock.o 00:03:24.504 CXX test/cpp_headers/stdinc.o 00:03:24.504 CXX test/cpp_headers/string.o 00:03:24.504 LINK cuse 00:03:24.504 CXX test/cpp_headers/thread.o 00:03:24.504 CXX test/cpp_headers/trace.o 00:03:24.504 CXX test/cpp_headers/trace_parser.o 00:03:24.504 CXX test/cpp_headers/tree.o 00:03:24.504 CXX test/cpp_headers/ublk.o 00:03:24.504 CXX test/cpp_headers/util.o 00:03:24.504 CXX test/cpp_headers/uuid.o 00:03:24.504 CXX test/cpp_headers/version.o 00:03:24.504 CXX test/cpp_headers/vfio_user_pci.o 00:03:24.504 CXX test/cpp_headers/vfio_user_spec.o 00:03:24.504 CXX test/cpp_headers/vhost.o 00:03:24.504 CXX test/cpp_headers/vmd.o 00:03:24.763 CXX test/cpp_headers/xor.o 00:03:24.763 CXX test/cpp_headers/zipf.o 00:03:25.022 LINK esnap 00:03:25.590 00:03:25.590 real 1m30.255s 00:03:25.590 user 8m15.784s 00:03:25.590 sys 1m44.309s 00:03:25.590 03:06:08 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:25.590 ************************************ 00:03:25.590 END TEST make 00:03:25.590 ************************************ 00:03:25.590 03:06:08 make -- common/autotest_common.sh@10 -- $ set +x 00:03:25.590 03:06:08 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:25.590 03:06:08 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:25.590 03:06:08 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:25.590 03:06:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.590 03:06:08 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:25.590 03:06:08 -- pm/common@44 -- $ pid=5233 00:03:25.590 03:06:08 -- pm/common@50 -- $ kill -TERM 5233 00:03:25.590 03:06:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.590 03:06:08 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:25.590 03:06:08 -- pm/common@44 -- $ pid=5234 00:03:25.590 03:06:08 -- pm/common@50 -- $ kill -TERM 5234 00:03:25.590 03:06:08 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:25.590 03:06:08 -- common/autotest_common.sh@1681 -- # lcov --version 00:03:25.590 03:06:08 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:25.850 03:06:08 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:25.850 03:06:08 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:25.850 03:06:08 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:25.850 03:06:08 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:25.850 03:06:08 -- scripts/common.sh@336 -- # IFS=.-: 00:03:25.850 03:06:08 -- scripts/common.sh@336 -- # read -ra ver1 00:03:25.850 03:06:08 -- scripts/common.sh@337 -- # IFS=.-: 00:03:25.850 03:06:08 -- scripts/common.sh@337 -- # read -ra ver2 00:03:25.850 03:06:08 -- scripts/common.sh@338 -- # local 'op=<' 00:03:25.850 03:06:08 -- scripts/common.sh@340 -- # ver1_l=2 00:03:25.850 03:06:08 -- scripts/common.sh@341 -- # ver2_l=1 00:03:25.850 03:06:08 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:25.850 03:06:08 -- scripts/common.sh@344 -- # case "$op" in 00:03:25.850 03:06:08 -- scripts/common.sh@345 -- # : 1 00:03:25.850 03:06:08 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:25.850 03:06:08 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:25.850 03:06:08 -- scripts/common.sh@365 -- # decimal 1 00:03:25.850 03:06:08 -- scripts/common.sh@353 -- # local d=1 00:03:25.850 03:06:08 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:25.850 03:06:08 -- scripts/common.sh@355 -- # echo 1 00:03:25.850 03:06:08 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:25.850 03:06:08 -- scripts/common.sh@366 -- # decimal 2 00:03:25.850 03:06:08 -- scripts/common.sh@353 -- # local d=2 00:03:25.850 03:06:08 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:25.850 03:06:08 -- scripts/common.sh@355 -- # echo 2 00:03:25.850 03:06:08 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:25.850 03:06:08 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:25.850 03:06:08 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:25.850 03:06:08 -- scripts/common.sh@368 -- # return 0 00:03:25.850 03:06:08 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:25.850 03:06:08 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:25.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.850 --rc genhtml_branch_coverage=1 00:03:25.850 --rc genhtml_function_coverage=1 00:03:25.850 --rc genhtml_legend=1 00:03:25.850 --rc geninfo_all_blocks=1 00:03:25.850 --rc geninfo_unexecuted_blocks=1 00:03:25.850 00:03:25.850 ' 00:03:25.850 03:06:08 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:25.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.850 --rc genhtml_branch_coverage=1 00:03:25.850 --rc genhtml_function_coverage=1 00:03:25.850 --rc genhtml_legend=1 00:03:25.850 --rc geninfo_all_blocks=1 00:03:25.850 --rc geninfo_unexecuted_blocks=1 00:03:25.850 00:03:25.850 ' 00:03:25.850 03:06:08 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:25.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.850 --rc genhtml_branch_coverage=1 00:03:25.850 --rc genhtml_function_coverage=1 00:03:25.850 --rc genhtml_legend=1 00:03:25.850 --rc geninfo_all_blocks=1 00:03:25.850 --rc geninfo_unexecuted_blocks=1 00:03:25.850 00:03:25.850 ' 00:03:25.850 03:06:08 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:25.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.850 --rc genhtml_branch_coverage=1 00:03:25.850 --rc genhtml_function_coverage=1 00:03:25.850 --rc genhtml_legend=1 00:03:25.850 --rc geninfo_all_blocks=1 00:03:25.850 --rc geninfo_unexecuted_blocks=1 00:03:25.850 00:03:25.850 ' 00:03:25.850 03:06:08 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:25.850 03:06:08 -- nvmf/common.sh@7 -- # uname -s 00:03:25.850 03:06:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:25.850 03:06:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:25.850 03:06:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:25.851 03:06:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:25.851 03:06:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:25.851 03:06:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:25.851 03:06:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:25.851 03:06:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:25.851 03:06:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:25.851 03:06:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:25.851 03:06:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:03:25.851 
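The `lt 1.15 2` / `cmp_versions` trace above is how autotest decides whether the installed lcov (1.15 here) predates lcov 2.x and therefore needs the older `--rc lcov_*` option names. Restated as a compact helper purely for readability (the real logic lives in scripts/common.sh), the comparison splits each version on `.`, `-` and `:` and compares the fields numerically:

```bash
# Simplified restatement of the dotted-version comparison traced above:
# split on '.', '-' and ':' and compare field by field, treating missing
# fields as 0. Returns success (0) when $1 is strictly older than $2.
version_lt() {
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2.x, use the old option names"
```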
03:06:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:03:25.851 03:06:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:25.851 03:06:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:25.851 03:06:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:25.851 03:06:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:25.851 03:06:08 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:25.851 03:06:08 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:25.851 03:06:08 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:25.851 03:06:08 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:25.851 03:06:08 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:25.851 03:06:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.851 03:06:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.851 03:06:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.851 03:06:08 -- paths/export.sh@5 -- # export PATH 00:03:25.851 03:06:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.851 03:06:08 -- nvmf/common.sh@51 -- # : 0 00:03:25.851 03:06:08 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:25.851 03:06:08 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:25.851 03:06:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:25.851 03:06:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:25.851 03:06:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:25.851 03:06:08 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:25.851 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:25.851 03:06:08 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:25.851 03:06:08 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:25.851 03:06:08 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:25.851 03:06:09 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:25.851 03:06:09 -- spdk/autotest.sh@32 -- # uname -s 00:03:25.851 03:06:09 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:25.851 03:06:09 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:25.851 03:06:09 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:25.851 03:06:09 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:25.851 03:06:09 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:25.851 03:06:09 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:25.851 03:06:09 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:25.851 03:06:09 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:25.851 03:06:09 -- spdk/autotest.sh@48 -- # udevadm_pid=54341 00:03:25.851 03:06:09 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:25.851 03:06:09 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:25.851 03:06:09 -- pm/common@17 -- # local monitor 00:03:25.851 03:06:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.851 03:06:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.851 03:06:09 -- pm/common@25 -- # sleep 1 00:03:25.851 03:06:09 -- pm/common@21 -- # date +%s 00:03:25.851 03:06:09 -- pm/common@21 -- # date +%s 00:03:25.851 03:06:09 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728443169 00:03:25.851 03:06:09 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728443169 00:03:25.851 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728443169_collect-cpu-load.pm.log 00:03:25.851 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728443169_collect-vmstat.pm.log 00:03:26.788 03:06:10 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:26.788 03:06:10 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:26.788 03:06:10 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:26.788 03:06:10 -- common/autotest_common.sh@10 -- # set +x 00:03:26.788 03:06:10 -- spdk/autotest.sh@59 -- # create_test_list 00:03:26.788 03:06:10 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:26.788 03:06:10 -- common/autotest_common.sh@10 -- # set +x 00:03:27.048 03:06:10 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:27.048 03:06:10 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:27.048 03:06:10 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:27.048 03:06:10 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:27.048 03:06:10 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:27.048 03:06:10 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:27.048 03:06:10 -- common/autotest_common.sh@1455 -- # uname 00:03:27.048 03:06:10 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:27.048 03:06:10 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:27.048 03:06:10 -- common/autotest_common.sh@1475 -- # uname 00:03:27.048 03:06:10 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:27.048 03:06:10 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:27.048 03:06:10 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:27.048 lcov: LCOV version 1.15 00:03:27.048 03:06:10 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:45.135 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:45.135 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:00.027 03:06:41 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:00.027 03:06:41 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:00.027 03:06:41 -- common/autotest_common.sh@10 -- # set +x 00:04:00.027 03:06:41 -- spdk/autotest.sh@78 -- # rm -f 00:04:00.027 03:06:41 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:00.027 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.027 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:00.027 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:00.027 03:06:42 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:00.027 03:06:42 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:00.027 03:06:42 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:00.027 03:06:42 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:00.027 03:06:42 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:00.027 03:06:42 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:00.027 03:06:42 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:00.027 03:06:42 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:00.027 03:06:42 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:00.027 03:06:42 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:00.027 03:06:42 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:00.027 03:06:42 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:00.027 03:06:42 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:00.027 03:06:42 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:00.027 03:06:42 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:00.027 03:06:42 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:04:00.027 03:06:42 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:04:00.027 03:06:42 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:00.027 03:06:42 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:00.027 03:06:42 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:00.027 03:06:42 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:04:00.027 03:06:42 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:04:00.027 03:06:42 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:00.027 03:06:42 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:00.027 03:06:42 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:00.027 03:06:42 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:00.027 03:06:42 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:00.027 03:06:42 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:00.028 03:06:42 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:00.028 03:06:42 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:00.028 No valid GPT data, bailing 
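The `get_zoned_devs` and `block_in_use` traces above show the pre-test device preparation: every `/dev/nvme*n*` namespace is checked against its sysfs `queue/zoned` attribute, and any namespace without a recognizable partition table gets its first MiB zeroed before the tests claim it (the `dd ... bs=1M count=1` records that follow). Condensed into one loop, with the structure simplified from the traced scripts but using the same sysfs and blkid interfaces:

```bash
# Condensed sketch of the device preparation traced above: skip zoned
# namespaces, and zero the first MiB of anything without a partition table.
shopt -s extglob nullglob

for dev in /dev/nvme*n!(*p*); do
    name=$(basename "$dev")

    # Zoned namespaces are handled separately by the tests; leave them alone.
    if [[ -e /sys/block/$name/queue/zoned ]] &&
       [[ $(cat "/sys/block/$name/queue/zoned") != none ]]; then
        echo "skipping zoned device $dev"
        continue
    fi

    # blkid prints the partition-table type (gpt, dos, ...) or nothing at all.
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    if [[ -z $pt ]]; then
        echo "no partition table on $dev, wiping first MiB"
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
done
```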
00:04:00.028 03:06:42 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:00.028 03:06:42 -- scripts/common.sh@394 -- # pt= 00:04:00.028 03:06:42 -- scripts/common.sh@395 -- # return 1 00:04:00.028 03:06:42 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:00.028 1+0 records in 00:04:00.028 1+0 records out 00:04:00.028 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00456101 s, 230 MB/s 00:04:00.028 03:06:42 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:00.028 03:06:42 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:00.028 03:06:42 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:00.028 03:06:42 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:00.028 03:06:42 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:00.028 No valid GPT data, bailing 00:04:00.028 03:06:42 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:00.028 03:06:42 -- scripts/common.sh@394 -- # pt= 00:04:00.028 03:06:42 -- scripts/common.sh@395 -- # return 1 00:04:00.028 03:06:42 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:00.028 1+0 records in 00:04:00.028 1+0 records out 00:04:00.028 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00360137 s, 291 MB/s 00:04:00.028 03:06:42 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:00.028 03:06:42 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:00.028 03:06:42 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:00.028 03:06:42 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:00.028 03:06:42 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:00.028 No valid GPT data, bailing 00:04:00.028 03:06:42 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:00.028 03:06:42 -- scripts/common.sh@394 -- # pt= 00:04:00.028 03:06:42 -- scripts/common.sh@395 -- # return 1 00:04:00.028 03:06:42 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:00.028 1+0 records in 00:04:00.028 1+0 records out 00:04:00.028 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00360851 s, 291 MB/s 00:04:00.028 03:06:42 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:00.028 03:06:42 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:00.028 03:06:42 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:00.028 03:06:42 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:00.028 03:06:42 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:00.028 No valid GPT data, bailing 00:04:00.028 03:06:42 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:00.028 03:06:42 -- scripts/common.sh@394 -- # pt= 00:04:00.028 03:06:42 -- scripts/common.sh@395 -- # return 1 00:04:00.028 03:06:42 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:00.028 1+0 records in 00:04:00.028 1+0 records out 00:04:00.028 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00474986 s, 221 MB/s 00:04:00.028 03:06:42 -- spdk/autotest.sh@105 -- # sync 00:04:00.028 03:06:42 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:00.028 03:06:42 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:00.028 03:06:42 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:01.405 03:06:44 -- spdk/autotest.sh@111 -- # uname -s 00:04:01.405 03:06:44 -- spdk/autotest.sh@111 -- # [[ Linux 
== Linux ]] 00:04:01.405 03:06:44 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:01.405 03:06:44 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:01.972 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:01.972 Hugepages 00:04:01.972 node hugesize free / total 00:04:01.972 node0 1048576kB 0 / 0 00:04:01.972 node0 2048kB 0 / 0 00:04:01.972 00:04:01.972 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:01.972 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:01.972 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:02.231 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:02.231 03:06:45 -- spdk/autotest.sh@117 -- # uname -s 00:04:02.231 03:06:45 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:02.231 03:06:45 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:02.231 03:06:45 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:02.798 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:02.798 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.055 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.055 03:06:46 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:03.991 03:06:47 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:03.991 03:06:47 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:03.991 03:06:47 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:03.991 03:06:47 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:03.991 03:06:47 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:03.991 03:06:47 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:03.992 03:06:47 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:03.992 03:06:47 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:03.992 03:06:47 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:03.992 03:06:47 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:03.992 03:06:47 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:03.992 03:06:47 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:04.250 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.509 Waiting for block devices as requested 00:04:04.509 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:04.509 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:04.509 03:06:47 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:04.509 03:06:47 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:04.509 03:06:47 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:04.509 03:06:47 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:04.509 03:06:47 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:04.509 03:06:47 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:04.509 03:06:47 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:04.509 03:06:47 -- common/autotest_common.sh@1490 -- # 
printf '%s\n' nvme1 00:04:04.509 03:06:47 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:04.509 03:06:47 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:04.509 03:06:47 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:04.509 03:06:47 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:04.509 03:06:47 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:04.509 03:06:47 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:04.510 03:06:47 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:04.510 03:06:47 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:04.510 03:06:47 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:04.510 03:06:47 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:04.769 03:06:47 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:04.769 03:06:47 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:04.769 03:06:47 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:04.769 03:06:47 -- common/autotest_common.sh@1541 -- # continue 00:04:04.769 03:06:47 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:04.769 03:06:47 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:04.769 03:06:47 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:04.769 03:06:47 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:04.769 03:06:47 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:04.769 03:06:47 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:04.769 03:06:47 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:04.769 03:06:47 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:04.769 03:06:47 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:04.769 03:06:47 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:04.769 03:06:47 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:04.769 03:06:47 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:04.769 03:06:47 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:04.769 03:06:47 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:04.769 03:06:47 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:04.769 03:06:47 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:04.769 03:06:47 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:04.769 03:06:47 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:04.769 03:06:47 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:04.769 03:06:47 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:04.769 03:06:47 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:04.769 03:06:47 -- common/autotest_common.sh@1541 -- # continue 00:04:04.769 03:06:47 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:04.769 03:06:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:04.769 03:06:47 -- common/autotest_common.sh@10 -- # set +x 00:04:04.769 03:06:47 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:04.769 03:06:47 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:04.769 03:06:47 -- common/autotest_common.sh@10 -- # set +x 00:04:04.769 03:06:47 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:05.337 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:05.337 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:05.595 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:05.595 03:06:48 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:05.595 03:06:48 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:05.596 03:06:48 -- common/autotest_common.sh@10 -- # set +x 00:04:05.596 03:06:48 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:05.596 03:06:48 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:05.596 03:06:48 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:05.596 03:06:48 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:05.596 03:06:48 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:05.596 03:06:48 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:05.596 03:06:48 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:05.596 03:06:48 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:05.596 03:06:48 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:05.596 03:06:48 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:05.596 03:06:48 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:05.596 03:06:48 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:05.596 03:06:48 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:05.596 03:06:48 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:05.596 03:06:48 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:05.596 03:06:48 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:05.596 03:06:48 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:05.596 03:06:48 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:05.596 03:06:48 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:05.596 03:06:48 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:05.596 03:06:48 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:05.596 03:06:48 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:05.596 03:06:48 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:05.596 03:06:48 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:05.596 03:06:48 -- common/autotest_common.sh@1570 -- # return 0 00:04:05.596 03:06:48 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:05.596 03:06:48 -- common/autotest_common.sh@1578 -- # return 0 00:04:05.596 03:06:48 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:05.596 03:06:48 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:05.596 03:06:48 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:05.596 03:06:48 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:05.596 03:06:48 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:05.596 03:06:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:05.596 03:06:48 -- common/autotest_common.sh@10 -- # set +x 00:04:05.596 03:06:48 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:04:05.596 03:06:48 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:05.596 03:06:48 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:05.596 03:06:48 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:05.596 03:06:48 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.596 03:06:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.596 03:06:48 -- common/autotest_common.sh@10 -- # set +x 00:04:05.596 ************************************ 00:04:05.596 START TEST env 00:04:05.596 ************************************ 00:04:05.596 03:06:48 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:05.855 * Looking for test storage... 00:04:05.855 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:05.855 03:06:48 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:05.855 03:06:48 env -- common/autotest_common.sh@1681 -- # lcov --version 00:04:05.855 03:06:48 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:05.855 03:06:48 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:05.855 03:06:48 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:05.855 03:06:48 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:05.855 03:06:48 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:05.855 03:06:48 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.855 03:06:48 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:05.855 03:06:48 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:05.855 03:06:48 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:05.855 03:06:48 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:05.855 03:06:48 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:05.855 03:06:48 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:05.855 03:06:48 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:05.855 03:06:48 env -- scripts/common.sh@344 -- # case "$op" in 00:04:05.855 03:06:48 env -- scripts/common.sh@345 -- # : 1 00:04:05.855 03:06:48 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:05.855 03:06:48 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:05.855 03:06:48 env -- scripts/common.sh@365 -- # decimal 1 00:04:05.855 03:06:49 env -- scripts/common.sh@353 -- # local d=1 00:04:05.855 03:06:49 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.855 03:06:49 env -- scripts/common.sh@355 -- # echo 1 00:04:05.855 03:06:49 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:05.855 03:06:49 env -- scripts/common.sh@366 -- # decimal 2 00:04:05.855 03:06:49 env -- scripts/common.sh@353 -- # local d=2 00:04:05.855 03:06:49 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.855 03:06:49 env -- scripts/common.sh@355 -- # echo 2 00:04:05.855 03:06:49 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:05.855 03:06:49 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:05.855 03:06:49 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:05.855 03:06:49 env -- scripts/common.sh@368 -- # return 0 00:04:05.855 03:06:49 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.855 03:06:49 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:05.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.855 --rc genhtml_branch_coverage=1 00:04:05.855 --rc genhtml_function_coverage=1 00:04:05.855 --rc genhtml_legend=1 00:04:05.855 --rc geninfo_all_blocks=1 00:04:05.855 --rc geninfo_unexecuted_blocks=1 00:04:05.855 00:04:05.855 ' 00:04:05.855 03:06:49 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:05.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.855 --rc genhtml_branch_coverage=1 00:04:05.855 --rc genhtml_function_coverage=1 00:04:05.855 --rc genhtml_legend=1 00:04:05.855 --rc geninfo_all_blocks=1 00:04:05.855 --rc geninfo_unexecuted_blocks=1 00:04:05.855 00:04:05.855 ' 00:04:05.855 03:06:49 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:05.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.855 --rc genhtml_branch_coverage=1 00:04:05.855 --rc genhtml_function_coverage=1 00:04:05.855 --rc genhtml_legend=1 00:04:05.855 --rc geninfo_all_blocks=1 00:04:05.855 --rc geninfo_unexecuted_blocks=1 00:04:05.855 00:04:05.855 ' 00:04:05.855 03:06:49 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:05.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.855 --rc genhtml_branch_coverage=1 00:04:05.855 --rc genhtml_function_coverage=1 00:04:05.855 --rc genhtml_legend=1 00:04:05.855 --rc geninfo_all_blocks=1 00:04:05.855 --rc geninfo_unexecuted_blocks=1 00:04:05.855 00:04:05.855 ' 00:04:05.855 03:06:49 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:05.855 03:06:49 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.855 03:06:49 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.855 03:06:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.855 ************************************ 00:04:05.855 START TEST env_memory 00:04:05.855 ************************************ 00:04:05.855 03:06:49 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:05.855 00:04:05.855 00:04:05.855 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.855 http://cunit.sourceforge.net/ 00:04:05.855 00:04:05.855 00:04:05.855 Suite: memory 00:04:05.855 Test: alloc and free memory map ...[2024-10-09 03:06:49.060672] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:05.855 passed 00:04:05.855 Test: mem map translation ...[2024-10-09 03:06:49.083856] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:05.855 [2024-10-09 03:06:49.083899] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:05.855 [2024-10-09 03:06:49.083938] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:05.855 [2024-10-09 03:06:49.083946] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:05.855 passed 00:04:05.855 Test: mem map registration ...[2024-10-09 03:06:49.132351] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:05.855 [2024-10-09 03:06:49.132372] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:05.855 passed 00:04:06.115 Test: mem map adjacent registrations ...passed 00:04:06.115 00:04:06.115 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.115 suites 1 1 n/a 0 0 00:04:06.115 tests 4 4 4 0 0 00:04:06.115 asserts 152 152 152 0 n/a 00:04:06.115 00:04:06.115 Elapsed time = 0.161 seconds 00:04:06.115 00:04:06.115 real 0m0.176s 00:04:06.115 user 0m0.162s 00:04:06.115 sys 0m0.011s 00:04:06.115 03:06:49 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.115 03:06:49 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:06.115 ************************************ 00:04:06.115 END TEST env_memory 00:04:06.115 ************************************ 00:04:06.115 03:06:49 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:06.115 03:06:49 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.115 03:06:49 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.115 03:06:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.115 ************************************ 00:04:06.115 START TEST env_vtophys 00:04:06.115 ************************************ 00:04:06.115 03:06:49 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:06.115 EAL: lib.eal log level changed from notice to debug 00:04:06.115 EAL: Detected lcore 0 as core 0 on socket 0 00:04:06.115 EAL: Detected lcore 1 as core 0 on socket 0 00:04:06.115 EAL: Detected lcore 2 as core 0 on socket 0 00:04:06.115 EAL: Detected lcore 3 as core 0 on socket 0 00:04:06.115 EAL: Detected lcore 4 as core 0 on socket 0 00:04:06.115 EAL: Detected lcore 5 as core 0 on socket 0 00:04:06.115 EAL: Detected lcore 6 as core 0 on socket 0 00:04:06.115 EAL: Detected lcore 7 as core 0 on socket 0 00:04:06.115 EAL: Detected lcore 8 as core 0 on socket 0 00:04:06.115 EAL: Detected lcore 9 as core 0 on socket 0 00:04:06.115 EAL: Maximum logical cores by configuration: 128 00:04:06.115 EAL: Detected CPU lcores: 10 00:04:06.115 EAL: Detected NUMA nodes: 1 00:04:06.115 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:06.115 EAL: Detected shared linkage of DPDK 00:04:06.115 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:06.115 EAL: Selected IOVA mode 'PA' 00:04:06.115 EAL: Probing VFIO support... 00:04:06.115 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:06.115 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:06.115 EAL: Ask a virtual area of 0x2e000 bytes 00:04:06.115 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:06.115 EAL: Setting up physically contiguous memory... 00:04:06.115 EAL: Setting maximum number of open files to 524288 00:04:06.115 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:06.115 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:06.115 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.115 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:06.115 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:06.115 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.115 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:06.115 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:06.115 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.115 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:06.115 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:06.115 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.115 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:06.115 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:06.115 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.115 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:06.115 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:06.115 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.115 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:06.115 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:06.115 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.115 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:06.115 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:06.115 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.115 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:06.115 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:06.115 EAL: Hugepages will be freed exactly as allocated. 00:04:06.115 EAL: No shared files mode enabled, IPC is disabled 00:04:06.115 EAL: No shared files mode enabled, IPC is disabled 00:04:06.115 EAL: TSC frequency is ~2200000 KHz 00:04:06.115 EAL: Main lcore 0 is ready (tid=7f397b24aa00;cpuset=[0]) 00:04:06.115 EAL: Trying to obtain current memory policy. 00:04:06.115 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.115 EAL: Restoring previous memory policy: 0 00:04:06.115 EAL: request: mp_malloc_sync 00:04:06.115 EAL: No shared files mode enabled, IPC is disabled 00:04:06.115 EAL: Heap on socket 0 was expanded by 2MB 00:04:06.115 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:06.115 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:06.115 EAL: Mem event callback 'spdk:(nil)' registered 00:04:06.115 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:06.115 00:04:06.115 00:04:06.115 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.115 http://cunit.sourceforge.net/ 00:04:06.115 00:04:06.115 00:04:06.115 Suite: components_suite 00:04:06.115 Test: vtophys_malloc_test ...passed 00:04:06.115 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:06.115 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.374 EAL: Restoring previous memory policy: 4 00:04:06.374 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.374 EAL: request: mp_malloc_sync 00:04:06.374 EAL: No shared files mode enabled, IPC is disabled 00:04:06.374 EAL: Heap on socket 0 was expanded by 4MB 00:04:06.374 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.374 EAL: request: mp_malloc_sync 00:04:06.374 EAL: No shared files mode enabled, IPC is disabled 00:04:06.374 EAL: Heap on socket 0 was shrunk by 4MB 00:04:06.374 EAL: Trying to obtain current memory policy. 00:04:06.374 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.374 EAL: Restoring previous memory policy: 4 00:04:06.374 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.374 EAL: request: mp_malloc_sync 00:04:06.374 EAL: No shared files mode enabled, IPC is disabled 00:04:06.374 EAL: Heap on socket 0 was expanded by 6MB 00:04:06.374 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.374 EAL: request: mp_malloc_sync 00:04:06.374 EAL: No shared files mode enabled, IPC is disabled 00:04:06.374 EAL: Heap on socket 0 was shrunk by 6MB 00:04:06.374 EAL: Trying to obtain current memory policy. 00:04:06.375 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.375 EAL: Restoring previous memory policy: 4 00:04:06.375 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.375 EAL: request: mp_malloc_sync 00:04:06.375 EAL: No shared files mode enabled, IPC is disabled 00:04:06.375 EAL: Heap on socket 0 was expanded by 10MB 00:04:06.375 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.375 EAL: request: mp_malloc_sync 00:04:06.375 EAL: No shared files mode enabled, IPC is disabled 00:04:06.375 EAL: Heap on socket 0 was shrunk by 10MB 00:04:06.375 EAL: Trying to obtain current memory policy. 00:04:06.375 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.375 EAL: Restoring previous memory policy: 4 00:04:06.375 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.375 EAL: request: mp_malloc_sync 00:04:06.375 EAL: No shared files mode enabled, IPC is disabled 00:04:06.375 EAL: Heap on socket 0 was expanded by 18MB 00:04:06.375 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.375 EAL: request: mp_malloc_sync 00:04:06.375 EAL: No shared files mode enabled, IPC is disabled 00:04:06.375 EAL: Heap on socket 0 was shrunk by 18MB 00:04:06.375 EAL: Trying to obtain current memory policy. 00:04:06.375 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.375 EAL: Restoring previous memory policy: 4 00:04:06.375 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.375 EAL: request: mp_malloc_sync 00:04:06.375 EAL: No shared files mode enabled, IPC is disabled 00:04:06.375 EAL: Heap on socket 0 was expanded by 34MB 00:04:06.375 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.375 EAL: request: mp_malloc_sync 00:04:06.375 EAL: No shared files mode enabled, IPC is disabled 00:04:06.375 EAL: Heap on socket 0 was shrunk by 34MB 00:04:06.375 EAL: Trying to obtain current memory policy. 
00:04:06.375 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.375 EAL: Restoring previous memory policy: 4 00:04:06.375 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.375 EAL: request: mp_malloc_sync 00:04:06.375 EAL: No shared files mode enabled, IPC is disabled 00:04:06.375 EAL: Heap on socket 0 was expanded by 66MB 00:04:06.375 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.375 EAL: request: mp_malloc_sync 00:04:06.375 EAL: No shared files mode enabled, IPC is disabled 00:04:06.375 EAL: Heap on socket 0 was shrunk by 66MB 00:04:06.375 EAL: Trying to obtain current memory policy. 00:04:06.375 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.375 EAL: Restoring previous memory policy: 4 00:04:06.375 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.375 EAL: request: mp_malloc_sync 00:04:06.375 EAL: No shared files mode enabled, IPC is disabled 00:04:06.375 EAL: Heap on socket 0 was expanded by 130MB 00:04:06.375 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.375 EAL: request: mp_malloc_sync 00:04:06.375 EAL: No shared files mode enabled, IPC is disabled 00:04:06.375 EAL: Heap on socket 0 was shrunk by 130MB 00:04:06.375 EAL: Trying to obtain current memory policy. 00:04:06.375 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.375 EAL: Restoring previous memory policy: 4 00:04:06.375 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.375 EAL: request: mp_malloc_sync 00:04:06.375 EAL: No shared files mode enabled, IPC is disabled 00:04:06.375 EAL: Heap on socket 0 was expanded by 258MB 00:04:06.375 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.634 EAL: request: mp_malloc_sync 00:04:06.634 EAL: No shared files mode enabled, IPC is disabled 00:04:06.634 EAL: Heap on socket 0 was shrunk by 258MB 00:04:06.634 EAL: Trying to obtain current memory policy. 00:04:06.634 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.634 EAL: Restoring previous memory policy: 4 00:04:06.634 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.634 EAL: request: mp_malloc_sync 00:04:06.634 EAL: No shared files mode enabled, IPC is disabled 00:04:06.634 EAL: Heap on socket 0 was expanded by 514MB 00:04:06.892 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.892 EAL: request: mp_malloc_sync 00:04:06.892 EAL: No shared files mode enabled, IPC is disabled 00:04:06.892 EAL: Heap on socket 0 was shrunk by 514MB 00:04:06.892 EAL: Trying to obtain current memory policy. 
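The alternating "Heap on socket 0 was expanded by N MB / shrunk by N MB" lines above (and the final 1026MB round just below) step through allocation sizes of 2^k + 2 MB for k = 1..10, consistent with a doubling allocation inside the vtophys malloc test; that reading is an inference from the log, not from the test's source. The visible size sequence can be reproduced directly:

```bash
# Reproduces the heap-expansion sizes seen in this log: 4, 6, 10, 18, 34,
# 66, 130, 258, 514 and 1026 MB, i.e. 2^k + 2 MB for k = 1..10.
for k in {1..10}; do
    printf '%dMB\n' $(( (1 << k) + 2 ))
done
```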
00:04:06.892 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.151 EAL: Restoring previous memory policy: 4 00:04:07.151 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.151 EAL: request: mp_malloc_sync 00:04:07.151 EAL: No shared files mode enabled, IPC is disabled 00:04:07.151 EAL: Heap on socket 0 was expanded by 1026MB 00:04:07.410 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.669 passed 00:04:07.669 00:04:07.669 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.669 suites 1 1 n/a 0 0 00:04:07.669 tests 2 2 2 0 0 00:04:07.669 asserts 5400 5400 5400 0 n/a 00:04:07.669 00:04:07.669 Elapsed time = 1.303 seconds 00:04:07.669 EAL: request: mp_malloc_sync 00:04:07.669 EAL: No shared files mode enabled, IPC is disabled 00:04:07.669 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:07.669 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.669 EAL: request: mp_malloc_sync 00:04:07.669 EAL: No shared files mode enabled, IPC is disabled 00:04:07.669 EAL: Heap on socket 0 was shrunk by 2MB 00:04:07.669 EAL: No shared files mode enabled, IPC is disabled 00:04:07.669 EAL: No shared files mode enabled, IPC is disabled 00:04:07.669 EAL: No shared files mode enabled, IPC is disabled 00:04:07.669 00:04:07.669 real 0m1.505s 00:04:07.669 user 0m0.832s 00:04:07.669 sys 0m0.540s 00:04:07.669 03:06:50 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.669 03:06:50 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:07.669 ************************************ 00:04:07.669 END TEST env_vtophys 00:04:07.669 ************************************ 00:04:07.669 03:06:50 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:07.669 03:06:50 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.669 03:06:50 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.669 03:06:50 env -- common/autotest_common.sh@10 -- # set +x 00:04:07.669 ************************************ 00:04:07.669 START TEST env_pci 00:04:07.669 ************************************ 00:04:07.669 03:06:50 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:07.669 00:04:07.669 00:04:07.669 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.669 http://cunit.sourceforge.net/ 00:04:07.669 00:04:07.669 00:04:07.669 Suite: pci 00:04:07.669 Test: pci_hook ...[2024-10-09 03:06:50.827993] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56560 has claimed it 00:04:07.669 passed 00:04:07.669 00:04:07.669 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.669 suites 1 1 n/a 0 0 00:04:07.669 tests 1 1 1 0 0 00:04:07.669 asserts 25 25 25 0 n/a 00:04:07.669 00:04:07.669 Elapsed time = 0.002 seconds 00:04:07.669 EAL: Cannot find device (10000:00:01.0) 00:04:07.669 EAL: Failed to attach device on primary process 00:04:07.669 00:04:07.669 real 0m0.021s 00:04:07.669 user 0m0.008s 00:04:07.669 sys 0m0.013s 00:04:07.669 03:06:50 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.669 03:06:50 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:07.669 ************************************ 00:04:07.669 END TEST env_pci 00:04:07.669 ************************************ 00:04:07.669 03:06:50 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:07.669 03:06:50 env -- env/env.sh@15 -- # uname 00:04:07.669 03:06:50 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:07.669 03:06:50 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:07.669 03:06:50 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:07.669 03:06:50 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:07.669 03:06:50 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.669 03:06:50 env -- common/autotest_common.sh@10 -- # set +x 00:04:07.669 ************************************ 00:04:07.669 START TEST env_dpdk_post_init 00:04:07.669 ************************************ 00:04:07.669 03:06:50 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:07.669 EAL: Detected CPU lcores: 10 00:04:07.669 EAL: Detected NUMA nodes: 1 00:04:07.669 EAL: Detected shared linkage of DPDK 00:04:07.669 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:07.669 EAL: Selected IOVA mode 'PA' 00:04:07.928 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:07.928 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:07.928 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:07.928 Starting DPDK initialization... 00:04:07.928 Starting SPDK post initialization... 00:04:07.928 SPDK NVMe probe 00:04:07.928 Attaching to 0000:00:10.0 00:04:07.928 Attaching to 0000:00:11.0 00:04:07.928 Attached to 0000:00:10.0 00:04:07.928 Attached to 0000:00:11.0 00:04:07.928 Cleaning up... 00:04:07.928 00:04:07.928 real 0m0.183s 00:04:07.928 user 0m0.051s 00:04:07.928 sys 0m0.032s 00:04:07.928 03:06:51 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.928 03:06:51 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:07.928 ************************************ 00:04:07.928 END TEST env_dpdk_post_init 00:04:07.928 ************************************ 00:04:07.928 03:06:51 env -- env/env.sh@26 -- # uname 00:04:07.928 03:06:51 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:07.928 03:06:51 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:07.928 03:06:51 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.928 03:06:51 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.928 03:06:51 env -- common/autotest_common.sh@10 -- # set +x 00:04:07.928 ************************************ 00:04:07.928 START TEST env_mem_callbacks 00:04:07.928 ************************************ 00:04:07.928 03:06:51 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:07.928 EAL: Detected CPU lcores: 10 00:04:07.928 EAL: Detected NUMA nodes: 1 00:04:07.929 EAL: Detected shared linkage of DPDK 00:04:07.929 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:07.929 EAL: Selected IOVA mode 'PA' 00:04:08.188 00:04:08.188 00:04:08.188 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.188 http://cunit.sourceforge.net/ 00:04:08.188 00:04:08.188 00:04:08.188 Suite: memory 00:04:08.188 Test: test ... 
00:04:08.188 register 0x200000200000 2097152 00:04:08.188 malloc 3145728 00:04:08.188 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:08.188 register 0x200000400000 4194304 00:04:08.188 buf 0x200000500000 len 3145728 PASSED 00:04:08.188 malloc 64 00:04:08.188 buf 0x2000004fff40 len 64 PASSED 00:04:08.188 malloc 4194304 00:04:08.188 register 0x200000800000 6291456 00:04:08.188 buf 0x200000a00000 len 4194304 PASSED 00:04:08.188 free 0x200000500000 3145728 00:04:08.188 free 0x2000004fff40 64 00:04:08.188 unregister 0x200000400000 4194304 PASSED 00:04:08.188 free 0x200000a00000 4194304 00:04:08.188 unregister 0x200000800000 6291456 PASSED 00:04:08.188 malloc 8388608 00:04:08.188 register 0x200000400000 10485760 00:04:08.188 buf 0x200000600000 len 8388608 PASSED 00:04:08.188 free 0x200000600000 8388608 00:04:08.188 unregister 0x200000400000 10485760 PASSED 00:04:08.188 passed 00:04:08.188 00:04:08.188 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.188 suites 1 1 n/a 0 0 00:04:08.188 tests 1 1 1 0 0 00:04:08.188 asserts 15 15 15 0 n/a 00:04:08.188 00:04:08.188 Elapsed time = 0.009 seconds 00:04:08.188 00:04:08.188 real 0m0.144s 00:04:08.188 user 0m0.022s 00:04:08.188 sys 0m0.021s 00:04:08.188 03:06:51 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:08.188 03:06:51 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:08.188 ************************************ 00:04:08.188 END TEST env_mem_callbacks 00:04:08.188 ************************************ 00:04:08.188 ************************************ 00:04:08.188 END TEST env 00:04:08.188 ************************************ 00:04:08.188 00:04:08.188 real 0m2.479s 00:04:08.188 user 0m1.272s 00:04:08.188 sys 0m0.865s 00:04:08.188 03:06:51 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:08.188 03:06:51 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.188 03:06:51 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:08.188 03:06:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:08.188 03:06:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:08.188 03:06:51 -- common/autotest_common.sh@10 -- # set +x 00:04:08.188 ************************************ 00:04:08.188 START TEST rpc 00:04:08.188 ************************************ 00:04:08.188 03:06:51 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:08.188 * Looking for test storage... 
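Every sub-test in this log is bracketed by the same `START TEST <name>` / `END TEST <name>` banners emitted by a `run_test <name> <command>` call. As a rough sketch of what such a wrapper boils down to (an assumption about its shape; the real helper lives in test/common/autotest_common.sh and also feeds the timing and coverage machinery):

```bash
# Rough sketch of the banner-and-run wrapper whose output frames every test
# in this log. Only the visible behaviour is reproduced here.
banner() {
    printf '************************************\n'
    printf '%s\n' "$*"
    printf '************************************\n'
}

run_test() {
    local name=$1
    shift
    banner "START TEST $name"
    time "$@"                 # run the test command, printing real/user/sys
    local rc=$?
    banner "END TEST $name"
    return $rc
}

# Hypothetical invocation matching the log:
# run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh
```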
00:04:08.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:08.188 03:06:51 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:08.188 03:06:51 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:08.188 03:06:51 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:08.448 03:06:51 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:08.448 03:06:51 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:08.448 03:06:51 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:08.448 03:06:51 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:08.448 03:06:51 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.448 03:06:51 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:08.448 03:06:51 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:08.448 03:06:51 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:08.448 03:06:51 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:08.448 03:06:51 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:08.448 03:06:51 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:08.448 03:06:51 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:08.448 03:06:51 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:08.448 03:06:51 rpc -- scripts/common.sh@345 -- # : 1 00:04:08.448 03:06:51 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:08.448 03:06:51 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:08.448 03:06:51 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:08.448 03:06:51 rpc -- scripts/common.sh@353 -- # local d=1 00:04:08.448 03:06:51 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.448 03:06:51 rpc -- scripts/common.sh@355 -- # echo 1 00:04:08.448 03:06:51 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.448 03:06:51 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:08.448 03:06:51 rpc -- scripts/common.sh@353 -- # local d=2 00:04:08.448 03:06:51 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.448 03:06:51 rpc -- scripts/common.sh@355 -- # echo 2 00:04:08.448 03:06:51 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.448 03:06:51 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.448 03:06:51 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.448 03:06:51 rpc -- scripts/common.sh@368 -- # return 0 00:04:08.448 03:06:51 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.448 03:06:51 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:08.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.448 --rc genhtml_branch_coverage=1 00:04:08.448 --rc genhtml_function_coverage=1 00:04:08.448 --rc genhtml_legend=1 00:04:08.448 --rc geninfo_all_blocks=1 00:04:08.448 --rc geninfo_unexecuted_blocks=1 00:04:08.448 00:04:08.448 ' 00:04:08.448 03:06:51 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:08.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.448 --rc genhtml_branch_coverage=1 00:04:08.448 --rc genhtml_function_coverage=1 00:04:08.448 --rc genhtml_legend=1 00:04:08.448 --rc geninfo_all_blocks=1 00:04:08.448 --rc geninfo_unexecuted_blocks=1 00:04:08.448 00:04:08.448 ' 00:04:08.448 03:06:51 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:08.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.448 --rc genhtml_branch_coverage=1 00:04:08.448 --rc genhtml_function_coverage=1 00:04:08.448 --rc 
genhtml_legend=1 00:04:08.448 --rc geninfo_all_blocks=1 00:04:08.448 --rc geninfo_unexecuted_blocks=1 00:04:08.448 00:04:08.448 ' 00:04:08.448 03:06:51 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:08.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.448 --rc genhtml_branch_coverage=1 00:04:08.448 --rc genhtml_function_coverage=1 00:04:08.448 --rc genhtml_legend=1 00:04:08.448 --rc geninfo_all_blocks=1 00:04:08.448 --rc geninfo_unexecuted_blocks=1 00:04:08.448 00:04:08.448 ' 00:04:08.448 03:06:51 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56683 00:04:08.448 03:06:51 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:08.448 03:06:51 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:08.448 03:06:51 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56683 00:04:08.448 03:06:51 rpc -- common/autotest_common.sh@831 -- # '[' -z 56683 ']' 00:04:08.448 03:06:51 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:08.448 03:06:51 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:08.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:08.448 03:06:51 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:08.448 03:06:51 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:08.448 03:06:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.448 [2024-10-09 03:06:51.606324] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:04:08.448 [2024-10-09 03:06:51.606478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56683 ] 00:04:08.448 [2024-10-09 03:06:51.739720] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.707 [2024-10-09 03:06:51.845857] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:08.707 [2024-10-09 03:06:51.845964] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56683' to capture a snapshot of events at runtime. 00:04:08.707 [2024-10-09 03:06:51.845977] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:08.707 [2024-10-09 03:06:51.845986] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:08.707 [2024-10-09 03:06:51.846004] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56683 for offline analysis/debug. 
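Since spdk_tgt was launched here with '-e bdev', the bdev tracepoint group is enabled and the notices above point at two ways to look at it. A minimal sketch of those two options (the pid 56683 and the shm path are specific to this run):

  # Attach to the running target's trace buffer, as the notice above suggests.
  spdk_trace -s spdk_tgt -p 56683
  # Or keep the raw shared-memory file around for offline analysis/debug.
  cp /dev/shm/spdk_tgt_trace.pid56683 /tmp/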
00:04:08.707 [2024-10-09 03:06:51.846493] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.707 [2024-10-09 03:06:51.920123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:08.966 03:06:52 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:08.966 03:06:52 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:08.966 03:06:52 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:08.966 03:06:52 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:08.966 03:06:52 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:08.966 03:06:52 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:08.966 03:06:52 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:08.966 03:06:52 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:08.966 03:06:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.966 ************************************ 00:04:08.966 START TEST rpc_integrity 00:04:08.966 ************************************ 00:04:08.966 03:06:52 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:08.966 03:06:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:08.966 03:06:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.966 03:06:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.966 03:06:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.966 03:06:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:08.966 03:06:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:08.966 03:06:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:08.966 03:06:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:08.966 03:06:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.966 03:06:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.966 03:06:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.966 03:06:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:08.966 03:06:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:08.966 03:06:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.966 03:06:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.966 03:06:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.966 03:06:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:08.966 { 00:04:08.966 "name": "Malloc0", 00:04:08.966 "aliases": [ 00:04:08.966 "e1b9dda8-df47-4097-bb28-2004ffc1cd65" 00:04:08.966 ], 00:04:08.966 "product_name": "Malloc disk", 00:04:08.966 "block_size": 512, 00:04:08.966 "num_blocks": 16384, 00:04:08.966 "uuid": "e1b9dda8-df47-4097-bb28-2004ffc1cd65", 00:04:08.966 "assigned_rate_limits": { 00:04:08.966 "rw_ios_per_sec": 0, 00:04:08.966 "rw_mbytes_per_sec": 0, 00:04:08.966 "r_mbytes_per_sec": 0, 00:04:08.966 "w_mbytes_per_sec": 0 00:04:08.966 }, 00:04:08.966 "claimed": false, 00:04:08.966 "zoned": false, 00:04:08.966 
"supported_io_types": { 00:04:08.966 "read": true, 00:04:08.966 "write": true, 00:04:08.966 "unmap": true, 00:04:08.966 "flush": true, 00:04:08.966 "reset": true, 00:04:08.966 "nvme_admin": false, 00:04:08.966 "nvme_io": false, 00:04:08.966 "nvme_io_md": false, 00:04:08.966 "write_zeroes": true, 00:04:08.966 "zcopy": true, 00:04:08.966 "get_zone_info": false, 00:04:08.966 "zone_management": false, 00:04:08.966 "zone_append": false, 00:04:08.966 "compare": false, 00:04:08.966 "compare_and_write": false, 00:04:08.966 "abort": true, 00:04:08.966 "seek_hole": false, 00:04:08.966 "seek_data": false, 00:04:08.966 "copy": true, 00:04:08.966 "nvme_iov_md": false 00:04:08.966 }, 00:04:08.966 "memory_domains": [ 00:04:08.966 { 00:04:08.966 "dma_device_id": "system", 00:04:08.966 "dma_device_type": 1 00:04:08.966 }, 00:04:08.966 { 00:04:08.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.966 "dma_device_type": 2 00:04:08.966 } 00:04:08.966 ], 00:04:08.966 "driver_specific": {} 00:04:08.966 } 00:04:08.966 ]' 00:04:08.966 03:06:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:09.226 03:06:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:09.226 03:06:52 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:09.226 03:06:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.226 03:06:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.226 [2024-10-09 03:06:52.288848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:09.226 [2024-10-09 03:06:52.288929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:09.226 [2024-10-09 03:06:52.288945] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x145b120 00:04:09.226 [2024-10-09 03:06:52.288954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:09.226 [2024-10-09 03:06:52.290509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:09.226 [2024-10-09 03:06:52.290563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:09.226 Passthru0 00:04:09.226 03:06:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.226 03:06:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:09.226 03:06:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.226 03:06:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.226 03:06:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.226 03:06:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:09.226 { 00:04:09.226 "name": "Malloc0", 00:04:09.226 "aliases": [ 00:04:09.226 "e1b9dda8-df47-4097-bb28-2004ffc1cd65" 00:04:09.226 ], 00:04:09.226 "product_name": "Malloc disk", 00:04:09.226 "block_size": 512, 00:04:09.226 "num_blocks": 16384, 00:04:09.226 "uuid": "e1b9dda8-df47-4097-bb28-2004ffc1cd65", 00:04:09.226 "assigned_rate_limits": { 00:04:09.226 "rw_ios_per_sec": 0, 00:04:09.226 "rw_mbytes_per_sec": 0, 00:04:09.226 "r_mbytes_per_sec": 0, 00:04:09.226 "w_mbytes_per_sec": 0 00:04:09.226 }, 00:04:09.226 "claimed": true, 00:04:09.226 "claim_type": "exclusive_write", 00:04:09.226 "zoned": false, 00:04:09.226 "supported_io_types": { 00:04:09.226 "read": true, 00:04:09.226 "write": true, 00:04:09.226 "unmap": true, 00:04:09.226 "flush": true, 00:04:09.226 "reset": true, 00:04:09.226 "nvme_admin": false, 
00:04:09.226 "nvme_io": false, 00:04:09.226 "nvme_io_md": false, 00:04:09.226 "write_zeroes": true, 00:04:09.226 "zcopy": true, 00:04:09.226 "get_zone_info": false, 00:04:09.226 "zone_management": false, 00:04:09.226 "zone_append": false, 00:04:09.226 "compare": false, 00:04:09.226 "compare_and_write": false, 00:04:09.226 "abort": true, 00:04:09.226 "seek_hole": false, 00:04:09.226 "seek_data": false, 00:04:09.226 "copy": true, 00:04:09.226 "nvme_iov_md": false 00:04:09.226 }, 00:04:09.226 "memory_domains": [ 00:04:09.226 { 00:04:09.226 "dma_device_id": "system", 00:04:09.226 "dma_device_type": 1 00:04:09.226 }, 00:04:09.226 { 00:04:09.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.226 "dma_device_type": 2 00:04:09.226 } 00:04:09.226 ], 00:04:09.226 "driver_specific": {} 00:04:09.226 }, 00:04:09.226 { 00:04:09.226 "name": "Passthru0", 00:04:09.226 "aliases": [ 00:04:09.226 "26e1cf27-dc60-5cc5-ba51-2dbd72c3a3b4" 00:04:09.226 ], 00:04:09.226 "product_name": "passthru", 00:04:09.226 "block_size": 512, 00:04:09.226 "num_blocks": 16384, 00:04:09.226 "uuid": "26e1cf27-dc60-5cc5-ba51-2dbd72c3a3b4", 00:04:09.226 "assigned_rate_limits": { 00:04:09.226 "rw_ios_per_sec": 0, 00:04:09.226 "rw_mbytes_per_sec": 0, 00:04:09.226 "r_mbytes_per_sec": 0, 00:04:09.226 "w_mbytes_per_sec": 0 00:04:09.226 }, 00:04:09.226 "claimed": false, 00:04:09.226 "zoned": false, 00:04:09.226 "supported_io_types": { 00:04:09.226 "read": true, 00:04:09.226 "write": true, 00:04:09.226 "unmap": true, 00:04:09.226 "flush": true, 00:04:09.226 "reset": true, 00:04:09.226 "nvme_admin": false, 00:04:09.226 "nvme_io": false, 00:04:09.226 "nvme_io_md": false, 00:04:09.226 "write_zeroes": true, 00:04:09.226 "zcopy": true, 00:04:09.226 "get_zone_info": false, 00:04:09.226 "zone_management": false, 00:04:09.226 "zone_append": false, 00:04:09.226 "compare": false, 00:04:09.226 "compare_and_write": false, 00:04:09.226 "abort": true, 00:04:09.226 "seek_hole": false, 00:04:09.226 "seek_data": false, 00:04:09.226 "copy": true, 00:04:09.226 "nvme_iov_md": false 00:04:09.226 }, 00:04:09.226 "memory_domains": [ 00:04:09.226 { 00:04:09.226 "dma_device_id": "system", 00:04:09.226 "dma_device_type": 1 00:04:09.226 }, 00:04:09.226 { 00:04:09.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.226 "dma_device_type": 2 00:04:09.226 } 00:04:09.226 ], 00:04:09.226 "driver_specific": { 00:04:09.226 "passthru": { 00:04:09.226 "name": "Passthru0", 00:04:09.226 "base_bdev_name": "Malloc0" 00:04:09.226 } 00:04:09.226 } 00:04:09.226 } 00:04:09.226 ]' 00:04:09.226 03:06:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:09.226 03:06:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:09.226 03:06:52 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:09.226 03:06:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.226 03:06:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.226 03:06:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.226 03:06:52 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:09.226 03:06:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.226 03:06:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.226 03:06:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.226 03:06:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:09.226 03:06:52 rpc.rpc_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.226 03:06:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.226 03:06:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.226 03:06:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:09.226 03:06:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:09.226 03:06:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:09.226 00:04:09.226 real 0m0.323s 00:04:09.226 user 0m0.214s 00:04:09.226 sys 0m0.037s 00:04:09.226 03:06:52 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:09.226 03:06:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.226 ************************************ 00:04:09.226 END TEST rpc_integrity 00:04:09.226 ************************************ 00:04:09.226 03:06:52 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:09.226 03:06:52 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:09.226 03:06:52 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:09.226 03:06:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.226 ************************************ 00:04:09.226 START TEST rpc_plugins 00:04:09.226 ************************************ 00:04:09.226 03:06:52 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:09.226 03:06:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:09.226 03:06:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.226 03:06:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.226 03:06:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.226 03:06:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:09.226 03:06:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:09.226 03:06:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.226 03:06:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.486 03:06:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.486 03:06:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:09.486 { 00:04:09.486 "name": "Malloc1", 00:04:09.486 "aliases": [ 00:04:09.486 "7bbd2308-8284-43b9-8327-728eebdce2c6" 00:04:09.486 ], 00:04:09.486 "product_name": "Malloc disk", 00:04:09.486 "block_size": 4096, 00:04:09.486 "num_blocks": 256, 00:04:09.486 "uuid": "7bbd2308-8284-43b9-8327-728eebdce2c6", 00:04:09.486 "assigned_rate_limits": { 00:04:09.486 "rw_ios_per_sec": 0, 00:04:09.486 "rw_mbytes_per_sec": 0, 00:04:09.486 "r_mbytes_per_sec": 0, 00:04:09.486 "w_mbytes_per_sec": 0 00:04:09.486 }, 00:04:09.486 "claimed": false, 00:04:09.486 "zoned": false, 00:04:09.486 "supported_io_types": { 00:04:09.486 "read": true, 00:04:09.486 "write": true, 00:04:09.486 "unmap": true, 00:04:09.486 "flush": true, 00:04:09.486 "reset": true, 00:04:09.486 "nvme_admin": false, 00:04:09.486 "nvme_io": false, 00:04:09.486 "nvme_io_md": false, 00:04:09.486 "write_zeroes": true, 00:04:09.486 "zcopy": true, 00:04:09.486 "get_zone_info": false, 00:04:09.486 "zone_management": false, 00:04:09.486 "zone_append": false, 00:04:09.486 "compare": false, 00:04:09.486 "compare_and_write": false, 00:04:09.486 "abort": true, 00:04:09.486 "seek_hole": false, 00:04:09.486 "seek_data": false, 00:04:09.486 "copy": true, 00:04:09.486 "nvme_iov_md": false 00:04:09.486 }, 00:04:09.486 "memory_domains": [ 00:04:09.486 { 
00:04:09.486 "dma_device_id": "system", 00:04:09.486 "dma_device_type": 1 00:04:09.486 }, 00:04:09.486 { 00:04:09.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.486 "dma_device_type": 2 00:04:09.486 } 00:04:09.486 ], 00:04:09.486 "driver_specific": {} 00:04:09.486 } 00:04:09.486 ]' 00:04:09.486 03:06:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:09.486 03:06:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:09.486 03:06:52 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:09.486 03:06:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.486 03:06:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.486 03:06:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.486 03:06:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:09.486 03:06:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.486 03:06:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.486 03:06:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.486 03:06:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:09.486 03:06:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:09.486 03:06:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:09.486 00:04:09.486 real 0m0.158s 00:04:09.486 user 0m0.106s 00:04:09.486 sys 0m0.018s 00:04:09.486 03:06:52 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:09.486 ************************************ 00:04:09.486 END TEST rpc_plugins 00:04:09.486 03:06:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.486 ************************************ 00:04:09.486 03:06:52 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:09.486 03:06:52 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:09.486 03:06:52 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:09.486 03:06:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.486 ************************************ 00:04:09.486 START TEST rpc_trace_cmd_test 00:04:09.486 ************************************ 00:04:09.486 03:06:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:09.486 03:06:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:09.486 03:06:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:09.486 03:06:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.486 03:06:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:09.486 03:06:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.486 03:06:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:09.486 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56683", 00:04:09.486 "tpoint_group_mask": "0x8", 00:04:09.486 "iscsi_conn": { 00:04:09.486 "mask": "0x2", 00:04:09.486 "tpoint_mask": "0x0" 00:04:09.486 }, 00:04:09.486 "scsi": { 00:04:09.486 "mask": "0x4", 00:04:09.486 "tpoint_mask": "0x0" 00:04:09.486 }, 00:04:09.486 "bdev": { 00:04:09.486 "mask": "0x8", 00:04:09.486 "tpoint_mask": "0xffffffffffffffff" 00:04:09.486 }, 00:04:09.486 "nvmf_rdma": { 00:04:09.486 "mask": "0x10", 00:04:09.486 "tpoint_mask": "0x0" 00:04:09.486 }, 00:04:09.486 "nvmf_tcp": { 00:04:09.486 "mask": "0x20", 00:04:09.486 "tpoint_mask": "0x0" 00:04:09.486 }, 00:04:09.486 "ftl": { 00:04:09.486 
"mask": "0x40", 00:04:09.486 "tpoint_mask": "0x0" 00:04:09.486 }, 00:04:09.486 "blobfs": { 00:04:09.486 "mask": "0x80", 00:04:09.486 "tpoint_mask": "0x0" 00:04:09.486 }, 00:04:09.486 "dsa": { 00:04:09.486 "mask": "0x200", 00:04:09.486 "tpoint_mask": "0x0" 00:04:09.486 }, 00:04:09.486 "thread": { 00:04:09.486 "mask": "0x400", 00:04:09.486 "tpoint_mask": "0x0" 00:04:09.486 }, 00:04:09.486 "nvme_pcie": { 00:04:09.486 "mask": "0x800", 00:04:09.486 "tpoint_mask": "0x0" 00:04:09.486 }, 00:04:09.486 "iaa": { 00:04:09.486 "mask": "0x1000", 00:04:09.486 "tpoint_mask": "0x0" 00:04:09.486 }, 00:04:09.486 "nvme_tcp": { 00:04:09.486 "mask": "0x2000", 00:04:09.486 "tpoint_mask": "0x0" 00:04:09.486 }, 00:04:09.486 "bdev_nvme": { 00:04:09.486 "mask": "0x4000", 00:04:09.487 "tpoint_mask": "0x0" 00:04:09.487 }, 00:04:09.487 "sock": { 00:04:09.487 "mask": "0x8000", 00:04:09.487 "tpoint_mask": "0x0" 00:04:09.487 }, 00:04:09.487 "blob": { 00:04:09.487 "mask": "0x10000", 00:04:09.487 "tpoint_mask": "0x0" 00:04:09.487 }, 00:04:09.487 "bdev_raid": { 00:04:09.487 "mask": "0x20000", 00:04:09.487 "tpoint_mask": "0x0" 00:04:09.487 }, 00:04:09.487 "scheduler": { 00:04:09.487 "mask": "0x40000", 00:04:09.487 "tpoint_mask": "0x0" 00:04:09.487 } 00:04:09.487 }' 00:04:09.487 03:06:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:09.487 03:06:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:09.746 03:06:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:09.746 03:06:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:09.746 03:06:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:09.746 03:06:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:09.746 03:06:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:09.746 03:06:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:09.746 03:06:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:09.746 03:06:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:09.746 00:04:09.746 real 0m0.278s 00:04:09.746 user 0m0.233s 00:04:09.746 sys 0m0.034s 00:04:09.746 03:06:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:09.746 03:06:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:09.746 ************************************ 00:04:09.746 END TEST rpc_trace_cmd_test 00:04:09.746 ************************************ 00:04:09.746 03:06:53 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:09.746 03:06:53 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:09.746 03:06:53 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:09.746 03:06:53 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:09.746 03:06:53 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:09.746 03:06:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.746 ************************************ 00:04:09.746 START TEST rpc_daemon_integrity 00:04:09.746 ************************************ 00:04:09.746 03:06:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:09.746 03:06:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:09.746 03:06:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.746 03:06:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.005 
03:06:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.005 03:06:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:10.005 03:06:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:10.005 03:06:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:10.005 03:06:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:10.005 03:06:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.005 03:06:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.005 03:06:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.005 03:06:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:10.005 03:06:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:10.005 03:06:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.005 03:06:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.005 03:06:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.005 03:06:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:10.005 { 00:04:10.005 "name": "Malloc2", 00:04:10.005 "aliases": [ 00:04:10.005 "ad3f8810-809c-423c-8d1e-162f5f924fba" 00:04:10.005 ], 00:04:10.005 "product_name": "Malloc disk", 00:04:10.005 "block_size": 512, 00:04:10.005 "num_blocks": 16384, 00:04:10.005 "uuid": "ad3f8810-809c-423c-8d1e-162f5f924fba", 00:04:10.005 "assigned_rate_limits": { 00:04:10.005 "rw_ios_per_sec": 0, 00:04:10.005 "rw_mbytes_per_sec": 0, 00:04:10.005 "r_mbytes_per_sec": 0, 00:04:10.005 "w_mbytes_per_sec": 0 00:04:10.005 }, 00:04:10.005 "claimed": false, 00:04:10.005 "zoned": false, 00:04:10.005 "supported_io_types": { 00:04:10.005 "read": true, 00:04:10.005 "write": true, 00:04:10.005 "unmap": true, 00:04:10.005 "flush": true, 00:04:10.005 "reset": true, 00:04:10.005 "nvme_admin": false, 00:04:10.005 "nvme_io": false, 00:04:10.005 "nvme_io_md": false, 00:04:10.005 "write_zeroes": true, 00:04:10.005 "zcopy": true, 00:04:10.005 "get_zone_info": false, 00:04:10.005 "zone_management": false, 00:04:10.005 "zone_append": false, 00:04:10.005 "compare": false, 00:04:10.005 "compare_and_write": false, 00:04:10.005 "abort": true, 00:04:10.005 "seek_hole": false, 00:04:10.005 "seek_data": false, 00:04:10.005 "copy": true, 00:04:10.005 "nvme_iov_md": false 00:04:10.005 }, 00:04:10.005 "memory_domains": [ 00:04:10.005 { 00:04:10.005 "dma_device_id": "system", 00:04:10.005 "dma_device_type": 1 00:04:10.005 }, 00:04:10.005 { 00:04:10.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:10.005 "dma_device_type": 2 00:04:10.005 } 00:04:10.005 ], 00:04:10.005 "driver_specific": {} 00:04:10.005 } 00:04:10.005 ]' 00:04:10.005 03:06:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:10.005 03:06:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:10.005 03:06:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:10.005 03:06:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.005 03:06:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.005 [2024-10-09 03:06:53.197919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:10.005 [2024-10-09 03:06:53.197969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:04:10.005 [2024-10-09 03:06:53.197987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1468a90 00:04:10.005 [2024-10-09 03:06:53.197996] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:10.005 [2024-10-09 03:06:53.199851] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:10.005 [2024-10-09 03:06:53.199902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:10.005 Passthru0 00:04:10.005 03:06:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.005 03:06:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:10.005 03:06:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.005 03:06:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.005 03:06:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.005 03:06:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:10.005 { 00:04:10.005 "name": "Malloc2", 00:04:10.005 "aliases": [ 00:04:10.005 "ad3f8810-809c-423c-8d1e-162f5f924fba" 00:04:10.005 ], 00:04:10.005 "product_name": "Malloc disk", 00:04:10.005 "block_size": 512, 00:04:10.005 "num_blocks": 16384, 00:04:10.005 "uuid": "ad3f8810-809c-423c-8d1e-162f5f924fba", 00:04:10.005 "assigned_rate_limits": { 00:04:10.005 "rw_ios_per_sec": 0, 00:04:10.005 "rw_mbytes_per_sec": 0, 00:04:10.005 "r_mbytes_per_sec": 0, 00:04:10.005 "w_mbytes_per_sec": 0 00:04:10.005 }, 00:04:10.005 "claimed": true, 00:04:10.005 "claim_type": "exclusive_write", 00:04:10.005 "zoned": false, 00:04:10.005 "supported_io_types": { 00:04:10.005 "read": true, 00:04:10.005 "write": true, 00:04:10.005 "unmap": true, 00:04:10.005 "flush": true, 00:04:10.006 "reset": true, 00:04:10.006 "nvme_admin": false, 00:04:10.006 "nvme_io": false, 00:04:10.006 "nvme_io_md": false, 00:04:10.006 "write_zeroes": true, 00:04:10.006 "zcopy": true, 00:04:10.006 "get_zone_info": false, 00:04:10.006 "zone_management": false, 00:04:10.006 "zone_append": false, 00:04:10.006 "compare": false, 00:04:10.006 "compare_and_write": false, 00:04:10.006 "abort": true, 00:04:10.006 "seek_hole": false, 00:04:10.006 "seek_data": false, 00:04:10.006 "copy": true, 00:04:10.006 "nvme_iov_md": false 00:04:10.006 }, 00:04:10.006 "memory_domains": [ 00:04:10.006 { 00:04:10.006 "dma_device_id": "system", 00:04:10.006 "dma_device_type": 1 00:04:10.006 }, 00:04:10.006 { 00:04:10.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:10.006 "dma_device_type": 2 00:04:10.006 } 00:04:10.006 ], 00:04:10.006 "driver_specific": {} 00:04:10.006 }, 00:04:10.006 { 00:04:10.006 "name": "Passthru0", 00:04:10.006 "aliases": [ 00:04:10.006 "2dfa74f9-f5d8-528c-baca-af5f75912b1b" 00:04:10.006 ], 00:04:10.006 "product_name": "passthru", 00:04:10.006 "block_size": 512, 00:04:10.006 "num_blocks": 16384, 00:04:10.006 "uuid": "2dfa74f9-f5d8-528c-baca-af5f75912b1b", 00:04:10.006 "assigned_rate_limits": { 00:04:10.006 "rw_ios_per_sec": 0, 00:04:10.006 "rw_mbytes_per_sec": 0, 00:04:10.006 "r_mbytes_per_sec": 0, 00:04:10.006 "w_mbytes_per_sec": 0 00:04:10.006 }, 00:04:10.006 "claimed": false, 00:04:10.006 "zoned": false, 00:04:10.006 "supported_io_types": { 00:04:10.006 "read": true, 00:04:10.006 "write": true, 00:04:10.006 "unmap": true, 00:04:10.006 "flush": true, 00:04:10.006 "reset": true, 00:04:10.006 "nvme_admin": false, 00:04:10.006 "nvme_io": false, 00:04:10.006 
"nvme_io_md": false, 00:04:10.006 "write_zeroes": true, 00:04:10.006 "zcopy": true, 00:04:10.006 "get_zone_info": false, 00:04:10.006 "zone_management": false, 00:04:10.006 "zone_append": false, 00:04:10.006 "compare": false, 00:04:10.006 "compare_and_write": false, 00:04:10.006 "abort": true, 00:04:10.006 "seek_hole": false, 00:04:10.006 "seek_data": false, 00:04:10.006 "copy": true, 00:04:10.006 "nvme_iov_md": false 00:04:10.006 }, 00:04:10.006 "memory_domains": [ 00:04:10.006 { 00:04:10.006 "dma_device_id": "system", 00:04:10.006 "dma_device_type": 1 00:04:10.006 }, 00:04:10.006 { 00:04:10.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:10.006 "dma_device_type": 2 00:04:10.006 } 00:04:10.006 ], 00:04:10.006 "driver_specific": { 00:04:10.006 "passthru": { 00:04:10.006 "name": "Passthru0", 00:04:10.006 "base_bdev_name": "Malloc2" 00:04:10.006 } 00:04:10.006 } 00:04:10.006 } 00:04:10.006 ]' 00:04:10.006 03:06:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:10.006 03:06:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:10.006 03:06:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:10.006 03:06:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.006 03:06:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.006 03:06:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.006 03:06:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:10.006 03:06:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.006 03:06:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.006 03:06:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.006 03:06:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:10.006 03:06:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.006 03:06:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.264 03:06:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.264 03:06:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:10.264 03:06:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:10.264 03:06:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:10.264 00:04:10.264 real 0m0.324s 00:04:10.264 user 0m0.224s 00:04:10.264 sys 0m0.037s 00:04:10.264 03:06:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:10.264 03:06:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.264 ************************************ 00:04:10.264 END TEST rpc_daemon_integrity 00:04:10.264 ************************************ 00:04:10.264 03:06:53 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:10.264 03:06:53 rpc -- rpc/rpc.sh@84 -- # killprocess 56683 00:04:10.264 03:06:53 rpc -- common/autotest_common.sh@950 -- # '[' -z 56683 ']' 00:04:10.264 03:06:53 rpc -- common/autotest_common.sh@954 -- # kill -0 56683 00:04:10.264 03:06:53 rpc -- common/autotest_common.sh@955 -- # uname 00:04:10.264 03:06:53 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:10.264 03:06:53 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56683 00:04:10.264 03:06:53 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:04:10.264 03:06:53 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:10.264 killing process with pid 56683 00:04:10.264 03:06:53 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 56683' 00:04:10.264 03:06:53 rpc -- common/autotest_common.sh@969 -- # kill 56683 00:04:10.264 03:06:53 rpc -- common/autotest_common.sh@974 -- # wait 56683 00:04:10.831 00:04:10.831 real 0m2.513s 00:04:10.831 user 0m3.157s 00:04:10.831 sys 0m0.703s 00:04:10.831 03:06:53 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:10.831 03:06:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.831 ************************************ 00:04:10.831 END TEST rpc 00:04:10.831 ************************************ 00:04:10.831 03:06:53 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:10.831 03:06:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:10.831 03:06:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:10.831 03:06:53 -- common/autotest_common.sh@10 -- # set +x 00:04:10.831 ************************************ 00:04:10.831 START TEST skip_rpc 00:04:10.831 ************************************ 00:04:10.831 03:06:53 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:10.831 * Looking for test storage... 00:04:10.831 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:10.831 03:06:54 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:10.831 03:06:54 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:10.831 03:06:54 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:10.831 03:06:54 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:10.831 03:06:54 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:10.831 03:06:54 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:10.831 03:06:54 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:10.831 03:06:54 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:10.831 03:06:54 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:10.831 03:06:54 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:10.831 03:06:54 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:10.831 03:06:54 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:10.831 03:06:54 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:10.831 03:06:54 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:10.831 03:06:54 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:10.831 03:06:54 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:10.831 03:06:54 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:10.831 03:06:54 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:10.831 03:06:54 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:10.831 03:06:54 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:10.831 03:06:54 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:10.831 03:06:54 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:10.831 03:06:54 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:10.831 03:06:54 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:10.831 03:06:54 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:10.831 03:06:54 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:10.832 03:06:54 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:10.832 03:06:54 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:10.832 03:06:54 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:10.832 03:06:54 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:10.832 03:06:54 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:10.832 03:06:54 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:10.832 03:06:54 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:10.832 03:06:54 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:10.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.832 --rc genhtml_branch_coverage=1 00:04:10.832 --rc genhtml_function_coverage=1 00:04:10.832 --rc genhtml_legend=1 00:04:10.832 --rc geninfo_all_blocks=1 00:04:10.832 --rc geninfo_unexecuted_blocks=1 00:04:10.832 00:04:10.832 ' 00:04:10.832 03:06:54 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:10.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.832 --rc genhtml_branch_coverage=1 00:04:10.832 --rc genhtml_function_coverage=1 00:04:10.832 --rc genhtml_legend=1 00:04:10.832 --rc geninfo_all_blocks=1 00:04:10.832 --rc geninfo_unexecuted_blocks=1 00:04:10.832 00:04:10.832 ' 00:04:10.832 03:06:54 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:10.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.832 --rc genhtml_branch_coverage=1 00:04:10.832 --rc genhtml_function_coverage=1 00:04:10.832 --rc genhtml_legend=1 00:04:10.832 --rc geninfo_all_blocks=1 00:04:10.832 --rc geninfo_unexecuted_blocks=1 00:04:10.832 00:04:10.832 ' 00:04:10.832 03:06:54 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:10.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.832 --rc genhtml_branch_coverage=1 00:04:10.832 --rc genhtml_function_coverage=1 00:04:10.832 --rc genhtml_legend=1 00:04:10.832 --rc geninfo_all_blocks=1 00:04:10.832 --rc geninfo_unexecuted_blocks=1 00:04:10.832 00:04:10.832 ' 00:04:10.832 03:06:54 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:10.832 03:06:54 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:10.832 03:06:54 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:10.832 03:06:54 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:10.832 03:06:54 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:10.832 03:06:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.832 ************************************ 00:04:10.832 START TEST skip_rpc 00:04:10.832 ************************************ 00:04:10.832 03:06:54 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:10.832 03:06:54 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=56876 00:04:10.832 03:06:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.832 03:06:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:10.832 03:06:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:11.091 [2024-10-09 03:06:54.191174] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:04:11.091 [2024-10-09 03:06:54.191296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56876 ] 00:04:11.091 [2024-10-09 03:06:54.330466] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.350 [2024-10-09 03:06:54.432057] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.350 [2024-10-09 03:06:54.505228] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:16.670 03:06:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:16.670 03:06:59 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:16.670 03:06:59 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:16.670 03:06:59 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:16.670 03:06:59 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:16.670 03:06:59 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:16.670 03:06:59 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:16.670 03:06:59 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:16.670 03:06:59 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.670 03:06:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.670 03:06:59 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:16.670 03:06:59 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:16.670 03:06:59 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:16.670 03:06:59 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:16.670 03:06:59 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:16.670 03:06:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:16.670 03:06:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56876 00:04:16.670 03:06:59 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 56876 ']' 00:04:16.670 03:06:59 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 56876 00:04:16.670 03:06:59 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:16.670 03:06:59 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:16.670 03:06:59 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56876 00:04:16.670 03:06:59 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:16.670 03:06:59 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:16.670 killing process with pid 56876 00:04:16.670 03:06:59 skip_rpc.skip_rpc -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 56876' 00:04:16.670 03:06:59 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 56876 00:04:16.670 03:06:59 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 56876 00:04:16.670 00:04:16.670 real 0m5.462s 00:04:16.670 user 0m5.066s 00:04:16.670 sys 0m0.311s 00:04:16.670 03:06:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:16.670 03:06:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.670 ************************************ 00:04:16.670 END TEST skip_rpc 00:04:16.670 ************************************ 00:04:16.670 03:06:59 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:16.670 03:06:59 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:16.670 03:06:59 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:16.670 03:06:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.671 ************************************ 00:04:16.671 START TEST skip_rpc_with_json 00:04:16.671 ************************************ 00:04:16.671 03:06:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:16.671 03:06:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:16.671 03:06:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56963 00:04:16.671 03:06:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:16.671 03:06:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:16.671 03:06:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56963 00:04:16.671 03:06:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 56963 ']' 00:04:16.671 03:06:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:16.671 03:06:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:16.671 03:06:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:16.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:16.671 03:06:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:16.671 03:06:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.671 [2024-10-09 03:06:59.695684] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
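The skip_rpc_with_json run starting here checks that a configuration saved over RPC can rebuild the same state with the RPC server disabled. A simplified sketch of that flow (paths are the test's CONFIG_PATH and LOG_PATH from above; the actual script backgrounds the target and uses its waitforlisten/killprocess helpers around each step):

  rpc_cmd nvmf_get_transports --trtype tcp      # expected to fail while no transport exists
  rpc_cmd nvmf_create_transport -t tcp
  rpc_cmd save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json
  # Restart from the saved file without an RPC server and check the transport comes back.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 \
      --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json \
      > /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 2>&1 &
  sleep 5
  kill "$!"                                     # the harness does killprocess on pid 56988 here
  grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt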
00:04:16.671 [2024-10-09 03:06:59.695795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56963 ] 00:04:16.671 [2024-10-09 03:06:59.824285] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.671 [2024-10-09 03:06:59.905287] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.929 [2024-10-09 03:06:59.977111] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:16.929 03:07:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:16.929 03:07:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:16.929 03:07:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:16.929 03:07:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.929 03:07:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.929 [2024-10-09 03:07:00.167256] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:16.929 request: 00:04:16.929 { 00:04:16.929 "trtype": "tcp", 00:04:16.929 "method": "nvmf_get_transports", 00:04:16.929 "req_id": 1 00:04:16.929 } 00:04:16.929 Got JSON-RPC error response 00:04:16.929 response: 00:04:16.929 { 00:04:16.929 "code": -19, 00:04:16.929 "message": "No such device" 00:04:16.929 } 00:04:16.929 03:07:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:16.929 03:07:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:16.929 03:07:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.929 03:07:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.929 [2024-10-09 03:07:00.179337] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:16.929 03:07:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.929 03:07:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:16.929 03:07:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.929 03:07:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:17.188 03:07:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.188 03:07:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:17.188 { 00:04:17.188 "subsystems": [ 00:04:17.188 { 00:04:17.188 "subsystem": "fsdev", 00:04:17.188 "config": [ 00:04:17.188 { 00:04:17.188 "method": "fsdev_set_opts", 00:04:17.188 "params": { 00:04:17.188 "fsdev_io_pool_size": 65535, 00:04:17.188 "fsdev_io_cache_size": 256 00:04:17.188 } 00:04:17.188 } 00:04:17.188 ] 00:04:17.188 }, 00:04:17.188 { 00:04:17.188 "subsystem": "keyring", 00:04:17.188 "config": [] 00:04:17.188 }, 00:04:17.188 { 00:04:17.188 "subsystem": "iobuf", 00:04:17.188 "config": [ 00:04:17.188 { 00:04:17.188 "method": "iobuf_set_options", 00:04:17.188 "params": { 00:04:17.188 "small_pool_count": 8192, 00:04:17.188 "large_pool_count": 1024, 00:04:17.188 "small_bufsize": 8192, 00:04:17.188 "large_bufsize": 135168 00:04:17.188 } 00:04:17.188 } 00:04:17.188 ] 00:04:17.188 
}, 00:04:17.188 { 00:04:17.188 "subsystem": "sock", 00:04:17.188 "config": [ 00:04:17.188 { 00:04:17.188 "method": "sock_set_default_impl", 00:04:17.188 "params": { 00:04:17.188 "impl_name": "uring" 00:04:17.188 } 00:04:17.188 }, 00:04:17.188 { 00:04:17.188 "method": "sock_impl_set_options", 00:04:17.188 "params": { 00:04:17.188 "impl_name": "ssl", 00:04:17.188 "recv_buf_size": 4096, 00:04:17.188 "send_buf_size": 4096, 00:04:17.188 "enable_recv_pipe": true, 00:04:17.188 "enable_quickack": false, 00:04:17.188 "enable_placement_id": 0, 00:04:17.188 "enable_zerocopy_send_server": true, 00:04:17.188 "enable_zerocopy_send_client": false, 00:04:17.188 "zerocopy_threshold": 0, 00:04:17.188 "tls_version": 0, 00:04:17.188 "enable_ktls": false 00:04:17.188 } 00:04:17.188 }, 00:04:17.188 { 00:04:17.188 "method": "sock_impl_set_options", 00:04:17.188 "params": { 00:04:17.188 "impl_name": "posix", 00:04:17.188 "recv_buf_size": 2097152, 00:04:17.188 "send_buf_size": 2097152, 00:04:17.188 "enable_recv_pipe": true, 00:04:17.188 "enable_quickack": false, 00:04:17.188 "enable_placement_id": 0, 00:04:17.188 "enable_zerocopy_send_server": true, 00:04:17.188 "enable_zerocopy_send_client": false, 00:04:17.188 "zerocopy_threshold": 0, 00:04:17.188 "tls_version": 0, 00:04:17.188 "enable_ktls": false 00:04:17.188 } 00:04:17.188 }, 00:04:17.188 { 00:04:17.188 "method": "sock_impl_set_options", 00:04:17.188 "params": { 00:04:17.188 "impl_name": "uring", 00:04:17.188 "recv_buf_size": 2097152, 00:04:17.188 "send_buf_size": 2097152, 00:04:17.188 "enable_recv_pipe": true, 00:04:17.188 "enable_quickack": false, 00:04:17.188 "enable_placement_id": 0, 00:04:17.188 "enable_zerocopy_send_server": false, 00:04:17.188 "enable_zerocopy_send_client": false, 00:04:17.188 "zerocopy_threshold": 0, 00:04:17.188 "tls_version": 0, 00:04:17.188 "enable_ktls": false 00:04:17.188 } 00:04:17.188 } 00:04:17.188 ] 00:04:17.188 }, 00:04:17.188 { 00:04:17.188 "subsystem": "vmd", 00:04:17.188 "config": [] 00:04:17.188 }, 00:04:17.188 { 00:04:17.188 "subsystem": "accel", 00:04:17.188 "config": [ 00:04:17.188 { 00:04:17.188 "method": "accel_set_options", 00:04:17.188 "params": { 00:04:17.188 "small_cache_size": 128, 00:04:17.188 "large_cache_size": 16, 00:04:17.188 "task_count": 2048, 00:04:17.188 "sequence_count": 2048, 00:04:17.188 "buf_count": 2048 00:04:17.188 } 00:04:17.188 } 00:04:17.188 ] 00:04:17.188 }, 00:04:17.188 { 00:04:17.188 "subsystem": "bdev", 00:04:17.188 "config": [ 00:04:17.188 { 00:04:17.188 "method": "bdev_set_options", 00:04:17.188 "params": { 00:04:17.188 "bdev_io_pool_size": 65535, 00:04:17.188 "bdev_io_cache_size": 256, 00:04:17.188 "bdev_auto_examine": true, 00:04:17.188 "iobuf_small_cache_size": 128, 00:04:17.188 "iobuf_large_cache_size": 16 00:04:17.188 } 00:04:17.188 }, 00:04:17.188 { 00:04:17.188 "method": "bdev_raid_set_options", 00:04:17.188 "params": { 00:04:17.188 "process_window_size_kb": 1024, 00:04:17.188 "process_max_bandwidth_mb_sec": 0 00:04:17.188 } 00:04:17.188 }, 00:04:17.188 { 00:04:17.189 "method": "bdev_iscsi_set_options", 00:04:17.189 "params": { 00:04:17.189 "timeout_sec": 30 00:04:17.189 } 00:04:17.189 }, 00:04:17.189 { 00:04:17.189 "method": "bdev_nvme_set_options", 00:04:17.189 "params": { 00:04:17.189 "action_on_timeout": "none", 00:04:17.189 "timeout_us": 0, 00:04:17.189 "timeout_admin_us": 0, 00:04:17.189 "keep_alive_timeout_ms": 10000, 00:04:17.189 "arbitration_burst": 0, 00:04:17.189 "low_priority_weight": 0, 00:04:17.189 "medium_priority_weight": 0, 00:04:17.189 "high_priority_weight": 0, 
00:04:17.189 "nvme_adminq_poll_period_us": 10000, 00:04:17.189 "nvme_ioq_poll_period_us": 0, 00:04:17.189 "io_queue_requests": 0, 00:04:17.189 "delay_cmd_submit": true, 00:04:17.189 "transport_retry_count": 4, 00:04:17.189 "bdev_retry_count": 3, 00:04:17.189 "transport_ack_timeout": 0, 00:04:17.189 "ctrlr_loss_timeout_sec": 0, 00:04:17.189 "reconnect_delay_sec": 0, 00:04:17.189 "fast_io_fail_timeout_sec": 0, 00:04:17.189 "disable_auto_failback": false, 00:04:17.189 "generate_uuids": false, 00:04:17.189 "transport_tos": 0, 00:04:17.189 "nvme_error_stat": false, 00:04:17.189 "rdma_srq_size": 0, 00:04:17.189 "io_path_stat": false, 00:04:17.189 "allow_accel_sequence": false, 00:04:17.189 "rdma_max_cq_size": 0, 00:04:17.189 "rdma_cm_event_timeout_ms": 0, 00:04:17.189 "dhchap_digests": [ 00:04:17.189 "sha256", 00:04:17.189 "sha384", 00:04:17.189 "sha512" 00:04:17.189 ], 00:04:17.189 "dhchap_dhgroups": [ 00:04:17.189 "null", 00:04:17.189 "ffdhe2048", 00:04:17.189 "ffdhe3072", 00:04:17.189 "ffdhe4096", 00:04:17.189 "ffdhe6144", 00:04:17.189 "ffdhe8192" 00:04:17.189 ] 00:04:17.189 } 00:04:17.189 }, 00:04:17.189 { 00:04:17.189 "method": "bdev_nvme_set_hotplug", 00:04:17.189 "params": { 00:04:17.189 "period_us": 100000, 00:04:17.189 "enable": false 00:04:17.189 } 00:04:17.189 }, 00:04:17.189 { 00:04:17.189 "method": "bdev_wait_for_examine" 00:04:17.189 } 00:04:17.189 ] 00:04:17.189 }, 00:04:17.189 { 00:04:17.189 "subsystem": "scsi", 00:04:17.189 "config": null 00:04:17.189 }, 00:04:17.189 { 00:04:17.189 "subsystem": "scheduler", 00:04:17.189 "config": [ 00:04:17.189 { 00:04:17.189 "method": "framework_set_scheduler", 00:04:17.189 "params": { 00:04:17.189 "name": "static" 00:04:17.189 } 00:04:17.189 } 00:04:17.189 ] 00:04:17.189 }, 00:04:17.189 { 00:04:17.189 "subsystem": "vhost_scsi", 00:04:17.189 "config": [] 00:04:17.189 }, 00:04:17.189 { 00:04:17.189 "subsystem": "vhost_blk", 00:04:17.189 "config": [] 00:04:17.189 }, 00:04:17.189 { 00:04:17.189 "subsystem": "ublk", 00:04:17.189 "config": [] 00:04:17.189 }, 00:04:17.189 { 00:04:17.189 "subsystem": "nbd", 00:04:17.189 "config": [] 00:04:17.189 }, 00:04:17.189 { 00:04:17.189 "subsystem": "nvmf", 00:04:17.189 "config": [ 00:04:17.189 { 00:04:17.189 "method": "nvmf_set_config", 00:04:17.189 "params": { 00:04:17.189 "discovery_filter": "match_any", 00:04:17.189 "admin_cmd_passthru": { 00:04:17.189 "identify_ctrlr": false 00:04:17.189 }, 00:04:17.189 "dhchap_digests": [ 00:04:17.189 "sha256", 00:04:17.189 "sha384", 00:04:17.189 "sha512" 00:04:17.189 ], 00:04:17.189 "dhchap_dhgroups": [ 00:04:17.189 "null", 00:04:17.189 "ffdhe2048", 00:04:17.189 "ffdhe3072", 00:04:17.189 "ffdhe4096", 00:04:17.189 "ffdhe6144", 00:04:17.189 "ffdhe8192" 00:04:17.189 ] 00:04:17.189 } 00:04:17.189 }, 00:04:17.189 { 00:04:17.189 "method": "nvmf_set_max_subsystems", 00:04:17.189 "params": { 00:04:17.189 "max_subsystems": 1024 00:04:17.189 } 00:04:17.189 }, 00:04:17.189 { 00:04:17.189 "method": "nvmf_set_crdt", 00:04:17.189 "params": { 00:04:17.189 "crdt1": 0, 00:04:17.189 "crdt2": 0, 00:04:17.189 "crdt3": 0 00:04:17.189 } 00:04:17.189 }, 00:04:17.189 { 00:04:17.189 "method": "nvmf_create_transport", 00:04:17.189 "params": { 00:04:17.189 "trtype": "TCP", 00:04:17.189 "max_queue_depth": 128, 00:04:17.189 "max_io_qpairs_per_ctrlr": 127, 00:04:17.189 "in_capsule_data_size": 4096, 00:04:17.189 "max_io_size": 131072, 00:04:17.189 "io_unit_size": 131072, 00:04:17.189 "max_aq_depth": 128, 00:04:17.189 "num_shared_buffers": 511, 00:04:17.189 "buf_cache_size": 4294967295, 00:04:17.189 
"dif_insert_or_strip": false, 00:04:17.189 "zcopy": false, 00:04:17.189 "c2h_success": true, 00:04:17.189 "sock_priority": 0, 00:04:17.189 "abort_timeout_sec": 1, 00:04:17.189 "ack_timeout": 0, 00:04:17.189 "data_wr_pool_size": 0 00:04:17.189 } 00:04:17.189 } 00:04:17.189 ] 00:04:17.189 }, 00:04:17.189 { 00:04:17.189 "subsystem": "iscsi", 00:04:17.189 "config": [ 00:04:17.189 { 00:04:17.189 "method": "iscsi_set_options", 00:04:17.189 "params": { 00:04:17.189 "node_base": "iqn.2016-06.io.spdk", 00:04:17.189 "max_sessions": 128, 00:04:17.189 "max_connections_per_session": 2, 00:04:17.189 "max_queue_depth": 64, 00:04:17.189 "default_time2wait": 2, 00:04:17.189 "default_time2retain": 20, 00:04:17.189 "first_burst_length": 8192, 00:04:17.189 "immediate_data": true, 00:04:17.189 "allow_duplicated_isid": false, 00:04:17.189 "error_recovery_level": 0, 00:04:17.189 "nop_timeout": 60, 00:04:17.189 "nop_in_interval": 30, 00:04:17.189 "disable_chap": false, 00:04:17.189 "require_chap": false, 00:04:17.189 "mutual_chap": false, 00:04:17.189 "chap_group": 0, 00:04:17.189 "max_large_datain_per_connection": 64, 00:04:17.189 "max_r2t_per_connection": 4, 00:04:17.189 "pdu_pool_size": 36864, 00:04:17.189 "immediate_data_pool_size": 16384, 00:04:17.189 "data_out_pool_size": 2048 00:04:17.189 } 00:04:17.189 } 00:04:17.189 ] 00:04:17.189 } 00:04:17.189 ] 00:04:17.189 } 00:04:17.189 03:07:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:17.189 03:07:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56963 00:04:17.189 03:07:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 56963 ']' 00:04:17.189 03:07:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 56963 00:04:17.189 03:07:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:17.189 03:07:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:17.189 03:07:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56963 00:04:17.189 03:07:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:17.189 03:07:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:17.189 killing process with pid 56963 00:04:17.189 03:07:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 56963' 00:04:17.189 03:07:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 56963 00:04:17.189 03:07:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 56963 00:04:17.756 03:07:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=56988 00:04:17.756 03:07:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:17.756 03:07:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:23.026 03:07:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 56988 00:04:23.026 03:07:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 56988 ']' 00:04:23.026 03:07:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 56988 00:04:23.026 03:07:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:23.026 03:07:05 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:23.026 03:07:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56988 00:04:23.026 03:07:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:23.026 03:07:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:23.026 killing process with pid 56988 00:04:23.026 03:07:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 56988' 00:04:23.026 03:07:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 56988 00:04:23.026 03:07:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 56988 00:04:23.026 03:07:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:23.026 03:07:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:23.026 00:04:23.026 real 0m6.648s 00:04:23.026 user 0m6.190s 00:04:23.026 sys 0m0.640s 00:04:23.026 03:07:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.026 03:07:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:23.026 ************************************ 00:04:23.026 END TEST skip_rpc_with_json 00:04:23.026 ************************************ 00:04:23.026 03:07:06 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:23.026 03:07:06 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:23.026 03:07:06 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:23.026 03:07:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.285 ************************************ 00:04:23.285 START TEST skip_rpc_with_delay 00:04:23.285 ************************************ 00:04:23.285 03:07:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:23.285 03:07:06 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:23.285 03:07:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:23.285 03:07:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:23.285 03:07:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.285 03:07:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:23.285 03:07:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.285 03:07:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:23.285 03:07:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.285 03:07:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:23.285 03:07:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.285 03:07:06 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:23.285 03:07:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:23.285 [2024-10-09 03:07:06.406635] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:23.285 [2024-10-09 03:07:06.406761] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:23.285 03:07:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:23.285 03:07:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:23.285 03:07:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:23.285 03:07:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:23.285 00:04:23.285 real 0m0.090s 00:04:23.285 user 0m0.057s 00:04:23.285 sys 0m0.032s 00:04:23.285 03:07:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.285 03:07:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:23.285 ************************************ 00:04:23.285 END TEST skip_rpc_with_delay 00:04:23.285 ************************************ 00:04:23.285 03:07:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:23.285 03:07:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:23.285 03:07:06 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:23.285 03:07:06 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:23.285 03:07:06 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:23.285 03:07:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.285 ************************************ 00:04:23.285 START TEST exit_on_failed_rpc_init 00:04:23.285 ************************************ 00:04:23.285 03:07:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:23.285 03:07:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57098 00:04:23.285 03:07:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:23.285 03:07:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57098 00:04:23.285 03:07:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57098 ']' 00:04:23.285 03:07:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.285 03:07:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:23.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:23.285 03:07:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.286 03:07:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:23.286 03:07:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:23.286 [2024-10-09 03:07:06.548501] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:04:23.286 [2024-10-09 03:07:06.548606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57098 ] 00:04:23.545 [2024-10-09 03:07:06.682878] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.545 [2024-10-09 03:07:06.798792] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.803 [2024-10-09 03:07:06.873518] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:24.376 03:07:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:24.376 03:07:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:24.376 03:07:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:24.376 03:07:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:24.376 03:07:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:24.376 03:07:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:24.376 03:07:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:24.376 03:07:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:24.376 03:07:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:24.376 03:07:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:24.376 03:07:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:24.376 03:07:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:24.376 03:07:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:24.376 03:07:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:24.376 03:07:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:24.376 [2024-10-09 03:07:07.621038] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:04:24.376 [2024-10-09 03:07:07.621158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57116 ] 00:04:24.688 [2024-10-09 03:07:07.760342] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.688 [2024-10-09 03:07:07.876357] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:24.688 [2024-10-09 03:07:07.876497] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
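The 'in use' error above, and the 'Unable to start RPC service' line that follows it, are the intended outcome of exit_on_failed_rpc_init: the second spdk_tgt is launched without its own RPC socket, so it cannot bind /var/tmp/spdk.sock, which the first instance already holds. Outside of this negative test, a second target would normally be given a separate socket. A minimal sketch of that, with paths relative to the SPDK repo root; the /var/tmp/spdk2.sock path and the rpc_get_methods probe are illustrative only:

  # first target owns the default RPC socket (/var/tmp/spdk.sock)
  build/bin/spdk_tgt -m 0x1 &
  # a second target gets its own socket via -r; rpc.py is pointed at it with -s
  build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &
  scripts/rpc.py -s /var/tmp/spdk2.sock rpc_get_methods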
00:04:24.688 [2024-10-09 03:07:07.876512] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:24.688 [2024-10-09 03:07:07.876520] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:24.688 03:07:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:24.688 03:07:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:24.688 03:07:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:24.688 03:07:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:24.688 03:07:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:24.688 03:07:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:24.688 03:07:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:24.688 03:07:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57098 00:04:24.688 03:07:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57098 ']' 00:04:24.688 03:07:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57098 00:04:24.688 03:07:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:24.688 03:07:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:24.688 03:07:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57098 00:04:24.946 03:07:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:24.946 03:07:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:24.946 killing process with pid 57098 00:04:24.947 03:07:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57098' 00:04:24.947 03:07:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57098 00:04:24.947 03:07:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57098 00:04:25.205 00:04:25.205 real 0m1.959s 00:04:25.205 user 0m2.276s 00:04:25.205 sys 0m0.476s 00:04:25.205 03:07:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.205 03:07:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:25.205 ************************************ 00:04:25.205 END TEST exit_on_failed_rpc_init 00:04:25.205 ************************************ 00:04:25.205 03:07:08 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:25.205 00:04:25.205 real 0m14.561s 00:04:25.205 user 0m13.773s 00:04:25.205 sys 0m1.669s 00:04:25.205 03:07:08 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.205 03:07:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.205 ************************************ 00:04:25.205 END TEST skip_rpc 00:04:25.205 ************************************ 00:04:25.464 03:07:08 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:25.464 03:07:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:25.464 03:07:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.464 03:07:08 -- common/autotest_common.sh@10 -- # set +x 00:04:25.464 
************************************ 00:04:25.464 START TEST rpc_client 00:04:25.464 ************************************ 00:04:25.464 03:07:08 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:25.464 * Looking for test storage... 00:04:25.464 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:25.464 03:07:08 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:25.464 03:07:08 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:04:25.464 03:07:08 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:25.464 03:07:08 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:25.464 03:07:08 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:25.464 03:07:08 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:25.464 03:07:08 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:25.464 03:07:08 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:25.464 03:07:08 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:25.464 03:07:08 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:25.464 03:07:08 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:25.464 03:07:08 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:25.464 03:07:08 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:25.464 03:07:08 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:25.464 03:07:08 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:25.464 03:07:08 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:25.464 03:07:08 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:25.464 03:07:08 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:25.464 03:07:08 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:25.464 03:07:08 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:25.464 03:07:08 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:25.464 03:07:08 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:25.464 03:07:08 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:25.464 03:07:08 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:25.464 03:07:08 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:25.464 03:07:08 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:25.464 03:07:08 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:25.464 03:07:08 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:25.464 03:07:08 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:25.464 03:07:08 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:25.464 03:07:08 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:25.464 03:07:08 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:25.464 03:07:08 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:25.464 03:07:08 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:25.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.464 --rc genhtml_branch_coverage=1 00:04:25.464 --rc genhtml_function_coverage=1 00:04:25.464 --rc genhtml_legend=1 00:04:25.464 --rc geninfo_all_blocks=1 00:04:25.464 --rc geninfo_unexecuted_blocks=1 00:04:25.464 00:04:25.464 ' 00:04:25.464 03:07:08 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:25.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.464 --rc genhtml_branch_coverage=1 00:04:25.464 --rc genhtml_function_coverage=1 00:04:25.464 --rc genhtml_legend=1 00:04:25.464 --rc geninfo_all_blocks=1 00:04:25.464 --rc geninfo_unexecuted_blocks=1 00:04:25.464 00:04:25.464 ' 00:04:25.464 03:07:08 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:25.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.464 --rc genhtml_branch_coverage=1 00:04:25.464 --rc genhtml_function_coverage=1 00:04:25.464 --rc genhtml_legend=1 00:04:25.464 --rc geninfo_all_blocks=1 00:04:25.464 --rc geninfo_unexecuted_blocks=1 00:04:25.464 00:04:25.464 ' 00:04:25.464 03:07:08 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:25.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.464 --rc genhtml_branch_coverage=1 00:04:25.464 --rc genhtml_function_coverage=1 00:04:25.464 --rc genhtml_legend=1 00:04:25.464 --rc geninfo_all_blocks=1 00:04:25.464 --rc geninfo_unexecuted_blocks=1 00:04:25.464 00:04:25.464 ' 00:04:25.464 03:07:08 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:25.464 OK 00:04:25.464 03:07:08 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:25.464 00:04:25.464 real 0m0.224s 00:04:25.464 user 0m0.147s 00:04:25.464 sys 0m0.083s 00:04:25.464 ************************************ 00:04:25.464 END TEST rpc_client 00:04:25.464 ************************************ 00:04:25.464 03:07:08 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.464 03:07:08 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:25.724 03:07:08 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:25.724 03:07:08 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:25.724 03:07:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.724 03:07:08 -- common/autotest_common.sh@10 -- # set +x 00:04:25.724 ************************************ 00:04:25.724 START TEST json_config 00:04:25.724 ************************************ 00:04:25.724 03:07:08 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:25.724 03:07:08 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:25.724 03:07:08 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:04:25.724 03:07:08 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:25.724 03:07:08 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:25.724 03:07:08 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:25.724 03:07:08 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:25.724 03:07:08 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:25.724 03:07:08 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:25.724 03:07:08 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:25.724 03:07:08 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:25.724 03:07:08 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:25.724 03:07:08 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:25.724 03:07:08 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:25.724 03:07:08 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:25.724 03:07:08 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:25.724 03:07:08 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:25.724 03:07:08 json_config -- scripts/common.sh@345 -- # : 1 00:04:25.724 03:07:08 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:25.724 03:07:08 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:25.724 03:07:08 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:25.724 03:07:08 json_config -- scripts/common.sh@353 -- # local d=1 00:04:25.724 03:07:08 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:25.724 03:07:08 json_config -- scripts/common.sh@355 -- # echo 1 00:04:25.724 03:07:08 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:25.724 03:07:08 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:25.724 03:07:08 json_config -- scripts/common.sh@353 -- # local d=2 00:04:25.724 03:07:08 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:25.724 03:07:08 json_config -- scripts/common.sh@355 -- # echo 2 00:04:25.724 03:07:08 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:25.724 03:07:08 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:25.724 03:07:08 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:25.724 03:07:08 json_config -- scripts/common.sh@368 -- # return 0 00:04:25.724 03:07:08 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:25.724 03:07:08 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:25.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.724 --rc genhtml_branch_coverage=1 00:04:25.724 --rc genhtml_function_coverage=1 00:04:25.724 --rc genhtml_legend=1 00:04:25.724 --rc geninfo_all_blocks=1 00:04:25.724 --rc geninfo_unexecuted_blocks=1 00:04:25.724 00:04:25.724 ' 00:04:25.724 03:07:08 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:25.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.724 --rc genhtml_branch_coverage=1 00:04:25.724 --rc genhtml_function_coverage=1 00:04:25.724 --rc genhtml_legend=1 00:04:25.724 --rc geninfo_all_blocks=1 00:04:25.724 --rc geninfo_unexecuted_blocks=1 00:04:25.724 00:04:25.724 ' 00:04:25.724 03:07:08 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:25.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.724 --rc genhtml_branch_coverage=1 00:04:25.724 --rc genhtml_function_coverage=1 00:04:25.724 --rc genhtml_legend=1 00:04:25.724 --rc geninfo_all_blocks=1 00:04:25.724 --rc geninfo_unexecuted_blocks=1 00:04:25.724 00:04:25.724 ' 00:04:25.724 03:07:08 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:25.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.724 --rc genhtml_branch_coverage=1 00:04:25.724 --rc genhtml_function_coverage=1 00:04:25.724 --rc genhtml_legend=1 00:04:25.724 --rc geninfo_all_blocks=1 00:04:25.724 --rc geninfo_unexecuted_blocks=1 00:04:25.724 00:04:25.724 ' 00:04:25.724 03:07:08 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:25.724 03:07:08 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:25.724 03:07:08 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:25.724 03:07:08 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:25.724 03:07:08 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:25.724 03:07:08 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:25.724 03:07:08 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:25.724 03:07:08 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:25.724 03:07:08 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:25.724 03:07:08 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:25.724 03:07:08 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:25.725 03:07:08 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:25.725 03:07:08 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:04:25.725 03:07:08 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:04:25.725 03:07:08 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:25.725 03:07:08 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:25.725 03:07:08 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:25.725 03:07:08 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:25.725 03:07:08 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:25.725 03:07:08 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:25.725 03:07:08 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:25.725 03:07:08 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:25.725 03:07:08 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:25.725 03:07:08 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.725 03:07:08 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.725 03:07:08 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.725 03:07:08 json_config -- paths/export.sh@5 -- # export PATH 00:04:25.725 03:07:08 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.725 03:07:08 json_config -- nvmf/common.sh@51 -- # : 0 00:04:25.725 03:07:08 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:25.725 03:07:08 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:25.725 03:07:08 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:25.725 03:07:08 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:25.725 03:07:08 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:25.725 03:07:08 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:25.725 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:25.725 03:07:08 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:25.725 03:07:08 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:25.725 03:07:08 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:25.725 03:07:08 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:25.725 03:07:08 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:25.725 03:07:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:25.725 03:07:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:25.725 03:07:08 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:25.725 03:07:08 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:25.725 03:07:08 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:25.725 03:07:08 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:25.725 03:07:08 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:25.725 03:07:09 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:25.725 INFO: JSON configuration test init 00:04:25.725 03:07:09 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:25.725 03:07:09 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:25.725 03:07:09 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:25.725 03:07:09 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:25.725 03:07:09 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:25.725 03:07:09 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:25.725 03:07:09 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:25.725 03:07:09 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:25.725 03:07:09 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:25.725 03:07:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.725 03:07:09 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:25.725 03:07:09 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:25.725 03:07:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.725 03:07:09 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:25.725 03:07:09 json_config -- json_config/common.sh@9 -- # local app=target 00:04:25.725 03:07:09 json_config -- json_config/common.sh@10 -- # shift 
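The target being started here is passed --wait-for-rpc, so it stops once the RPC server is listening and defers subsystem initialization until an explicit RPC arrives; that is the window in which startup-time options can still be changed. A minimal sketch of the hand-off, reusing the flags and socket this test passes; framework_start_init is the RPC that resumes initialization:

  # start the target paused at the RPC server
  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  # apply any pre-init settings here (e.g. sock_impl_set_options, accel_set_options), then resume
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init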
00:04:25.725 03:07:09 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:25.725 03:07:09 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:25.725 03:07:09 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:25.725 03:07:09 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:25.725 03:07:09 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:25.725 03:07:09 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57255 00:04:25.725 03:07:09 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:25.725 Waiting for target to run... 00:04:25.725 03:07:09 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:25.725 03:07:09 json_config -- json_config/common.sh@25 -- # waitforlisten 57255 /var/tmp/spdk_tgt.sock 00:04:25.725 03:07:09 json_config -- common/autotest_common.sh@831 -- # '[' -z 57255 ']' 00:04:25.725 03:07:09 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:25.725 03:07:09 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:25.725 03:07:09 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:25.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:25.725 03:07:09 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:25.725 03:07:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.984 [2024-10-09 03:07:09.085806] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:04:25.984 [2024-10-09 03:07:09.086153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57255 ] 00:04:26.243 [2024-10-09 03:07:09.536703] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.502 [2024-10-09 03:07:09.617002] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.070 00:04:27.070 03:07:10 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:27.070 03:07:10 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:27.070 03:07:10 json_config -- json_config/common.sh@26 -- # echo '' 00:04:27.070 03:07:10 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:27.070 03:07:10 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:27.070 03:07:10 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:27.070 03:07:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.070 03:07:10 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:27.070 03:07:10 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:27.070 03:07:10 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:27.070 03:07:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.070 03:07:10 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:27.070 03:07:10 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:27.070 03:07:10 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:27.329 [2024-10-09 03:07:10.469011] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:27.587 03:07:10 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:27.587 03:07:10 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:27.587 03:07:10 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:27.587 03:07:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.587 03:07:10 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:27.587 03:07:10 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:27.587 03:07:10 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:27.587 03:07:10 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:27.587 03:07:10 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:27.587 03:07:10 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:27.587 03:07:10 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:27.587 03:07:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:27.846 03:07:10 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:27.846 03:07:10 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:27.846 03:07:10 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:27.846 03:07:10 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:27.846 03:07:10 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:27.846 03:07:10 json_config -- json_config/json_config.sh@54 -- # sort 00:04:27.846 03:07:10 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:27.846 03:07:10 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:27.846 03:07:10 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:27.846 03:07:10 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:27.846 03:07:10 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:27.846 03:07:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.846 03:07:10 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:27.846 03:07:10 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:27.846 03:07:10 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:27.846 03:07:10 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:27.846 03:07:10 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:27.846 03:07:10 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:27.846 03:07:10 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:27.846 03:07:10 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:27.846 03:07:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.846 03:07:10 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:27.846 03:07:10 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:27.846 03:07:10 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:27.847 03:07:10 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:27.847 03:07:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:28.105 MallocForNvmf0 00:04:28.105 03:07:11 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:28.105 03:07:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:28.365 MallocForNvmf1 00:04:28.365 03:07:11 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:28.365 03:07:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:28.623 [2024-10-09 03:07:11.859477] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:28.623 03:07:11 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:28.623 03:07:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:28.882 03:07:12 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:28.882 03:07:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:29.449 03:07:12 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:29.449 03:07:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:29.709 03:07:12 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:29.709 03:07:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:29.968 [2024-10-09 03:07:13.036407] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:29.968 03:07:13 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:29.968 03:07:13 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:29.968 03:07:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.968 03:07:13 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:29.968 03:07:13 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:29.968 03:07:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.968 03:07:13 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:04:29.968 03:07:13 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:29.968 03:07:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:30.226 MallocBdevForConfigChangeCheck 00:04:30.226 03:07:13 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:30.226 03:07:13 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:30.226 03:07:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.226 03:07:13 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:30.226 03:07:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:30.794 INFO: shutting down applications... 00:04:30.794 03:07:13 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:30.794 03:07:13 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:30.794 03:07:13 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:30.794 03:07:13 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:30.794 03:07:13 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:31.054 Calling clear_iscsi_subsystem 00:04:31.054 Calling clear_nvmf_subsystem 00:04:31.054 Calling clear_nbd_subsystem 00:04:31.054 Calling clear_ublk_subsystem 00:04:31.054 Calling clear_vhost_blk_subsystem 00:04:31.054 Calling clear_vhost_scsi_subsystem 00:04:31.054 Calling clear_bdev_subsystem 00:04:31.054 03:07:14 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:31.054 03:07:14 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:31.054 03:07:14 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:31.054 03:07:14 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:31.054 03:07:14 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:31.054 03:07:14 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:31.622 03:07:14 json_config -- json_config/json_config.sh@352 -- # break 00:04:31.622 03:07:14 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:31.622 03:07:14 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:31.622 03:07:14 json_config -- json_config/common.sh@31 -- # local app=target 00:04:31.622 03:07:14 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:31.622 03:07:14 json_config -- json_config/common.sh@35 -- # [[ -n 57255 ]] 00:04:31.622 03:07:14 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57255 00:04:31.622 03:07:14 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:31.622 03:07:14 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:31.622 03:07:14 json_config -- json_config/common.sh@41 -- # kill -0 57255 00:04:31.622 03:07:14 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:04:32.190 03:07:15 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:32.190 03:07:15 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:32.190 03:07:15 json_config -- json_config/common.sh@41 -- # kill -0 57255 00:04:32.190 03:07:15 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:32.190 03:07:15 json_config -- json_config/common.sh@43 -- # break 00:04:32.190 03:07:15 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:32.190 03:07:15 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:32.190 SPDK target shutdown done 00:04:32.190 INFO: relaunching applications... 00:04:32.190 03:07:15 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:32.190 03:07:15 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:32.190 03:07:15 json_config -- json_config/common.sh@9 -- # local app=target 00:04:32.190 03:07:15 json_config -- json_config/common.sh@10 -- # shift 00:04:32.190 03:07:15 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:32.190 03:07:15 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:32.190 03:07:15 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:32.190 03:07:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:32.190 03:07:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:32.190 03:07:15 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57452 00:04:32.190 03:07:15 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:32.190 Waiting for target to run... 00:04:32.190 03:07:15 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:32.190 03:07:15 json_config -- json_config/common.sh@25 -- # waitforlisten 57452 /var/tmp/spdk_tgt.sock 00:04:32.190 03:07:15 json_config -- common/autotest_common.sh@831 -- # '[' -z 57452 ']' 00:04:32.190 03:07:15 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:32.190 03:07:15 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:32.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:32.190 03:07:15 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:32.190 03:07:15 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:32.190 03:07:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.190 [2024-10-09 03:07:15.263295] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:04:32.190 [2024-10-09 03:07:15.263428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57452 ] 00:04:32.449 [2024-10-09 03:07:15.707014] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.708 [2024-10-09 03:07:15.802033] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.708 [2024-10-09 03:07:15.938005] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:32.967 [2024-10-09 03:07:16.152736] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:32.967 [2024-10-09 03:07:16.184810] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:33.226 00:04:33.226 INFO: Checking if target configuration is the same... 00:04:33.226 03:07:16 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:33.226 03:07:16 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:33.226 03:07:16 json_config -- json_config/common.sh@26 -- # echo '' 00:04:33.226 03:07:16 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:33.226 03:07:16 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:33.226 03:07:16 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:33.226 03:07:16 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:33.226 03:07:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:33.226 + '[' 2 -ne 2 ']' 00:04:33.226 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:33.226 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:33.226 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:33.226 +++ basename /dev/fd/62 00:04:33.226 ++ mktemp /tmp/62.XXX 00:04:33.226 + tmp_file_1=/tmp/62.WNW 00:04:33.226 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:33.226 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:33.226 + tmp_file_2=/tmp/spdk_tgt_config.json.0up 00:04:33.226 + ret=0 00:04:33.226 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:33.484 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:33.743 + diff -u /tmp/62.WNW /tmp/spdk_tgt_config.json.0up 00:04:33.743 INFO: JSON config files are the same 00:04:33.743 + echo 'INFO: JSON config files are the same' 00:04:33.743 + rm /tmp/62.WNW /tmp/spdk_tgt_config.json.0up 00:04:33.743 + exit 0 00:04:33.743 INFO: changing configuration and checking if this can be detected... 00:04:33.743 03:07:16 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:33.743 03:07:16 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
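The block above is the whole 'same configuration' check: the live configuration is pulled from the target with save_config, both JSON documents are canonicalized by config_filter.py -method sort, and an empty diff -u is the passing result. Condensed into a few commands, assuming (as json_diff.sh does here) that config_filter.py reads the config on stdin, and using placeholder /tmp file names:

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live.json
  test/json_config/config_filter.py -method sort < /tmp/live.json > /tmp/live.sorted
  test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/ref.sorted
  # an empty diff reproduces the 'JSON config files are the same' result above
  diff -u /tmp/ref.sorted /tmp/live.sorted && echo 'INFO: JSON config files are the same'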
00:04:33.743 03:07:16 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:33.743 03:07:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:34.002 03:07:17 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:34.002 03:07:17 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:34.002 03:07:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:34.002 + '[' 2 -ne 2 ']' 00:04:34.002 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:34.002 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:34.002 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:34.002 +++ basename /dev/fd/62 00:04:34.002 ++ mktemp /tmp/62.XXX 00:04:34.002 + tmp_file_1=/tmp/62.0J2 00:04:34.002 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:34.002 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:34.002 + tmp_file_2=/tmp/spdk_tgt_config.json.rtp 00:04:34.002 + ret=0 00:04:34.002 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:34.570 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:34.570 + diff -u /tmp/62.0J2 /tmp/spdk_tgt_config.json.rtp 00:04:34.570 + ret=1 00:04:34.570 + echo '=== Start of file: /tmp/62.0J2 ===' 00:04:34.570 + cat /tmp/62.0J2 00:04:34.570 + echo '=== End of file: /tmp/62.0J2 ===' 00:04:34.570 + echo '' 00:04:34.570 + echo '=== Start of file: /tmp/spdk_tgt_config.json.rtp ===' 00:04:34.570 + cat /tmp/spdk_tgt_config.json.rtp 00:04:34.570 + echo '=== End of file: /tmp/spdk_tgt_config.json.rtp ===' 00:04:34.570 + echo '' 00:04:34.570 + rm /tmp/62.0J2 /tmp/spdk_tgt_config.json.rtp 00:04:34.570 + exit 1 00:04:34.570 INFO: configuration change detected. 00:04:34.571 03:07:17 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
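The change-detection leg is the mirror image of the check above: the marker bdev MallocBdevForConfigChangeCheck is deleted over RPC and the same sorted diff is run again, and this time a non-empty diff is what the test expects. Roughly, under the same assumptions and placeholder paths as the previous sketch:

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | test/json_config/config_filter.py -method sort > /tmp/changed.sorted
  # a non-zero diff status now means the deletion was picked up
  diff -u /tmp/ref.sorted /tmp/changed.sorted || echo 'INFO: configuration change detected.'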
00:04:34.571 03:07:17 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:34.571 03:07:17 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:34.571 03:07:17 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:34.571 03:07:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.571 03:07:17 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:34.571 03:07:17 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:34.571 03:07:17 json_config -- json_config/json_config.sh@324 -- # [[ -n 57452 ]] 00:04:34.571 03:07:17 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:34.571 03:07:17 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:34.571 03:07:17 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:34.571 03:07:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.571 03:07:17 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:34.571 03:07:17 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:34.571 03:07:17 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:34.571 03:07:17 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:34.571 03:07:17 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:34.571 03:07:17 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:34.571 03:07:17 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:34.571 03:07:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.571 03:07:17 json_config -- json_config/json_config.sh@330 -- # killprocess 57452 00:04:34.571 03:07:17 json_config -- common/autotest_common.sh@950 -- # '[' -z 57452 ']' 00:04:34.571 03:07:17 json_config -- common/autotest_common.sh@954 -- # kill -0 57452 00:04:34.571 03:07:17 json_config -- common/autotest_common.sh@955 -- # uname 00:04:34.571 03:07:17 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:34.571 03:07:17 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57452 00:04:34.571 killing process with pid 57452 00:04:34.571 03:07:17 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:34.571 03:07:17 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:34.571 03:07:17 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57452' 00:04:34.571 03:07:17 json_config -- common/autotest_common.sh@969 -- # kill 57452 00:04:34.571 03:07:17 json_config -- common/autotest_common.sh@974 -- # wait 57452 00:04:34.872 03:07:18 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:34.872 03:07:18 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:34.872 03:07:18 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:34.872 03:07:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.872 INFO: Success 00:04:34.872 03:07:18 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:34.872 03:07:18 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:34.872 ************************************ 00:04:34.872 END TEST json_config 00:04:34.872 
************************************ 00:04:34.872 00:04:34.872 real 0m9.355s 00:04:34.872 user 0m13.529s 00:04:34.872 sys 0m1.998s 00:04:34.872 03:07:18 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.872 03:07:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.131 03:07:18 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:35.131 03:07:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.131 03:07:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.131 03:07:18 -- common/autotest_common.sh@10 -- # set +x 00:04:35.131 ************************************ 00:04:35.131 START TEST json_config_extra_key 00:04:35.131 ************************************ 00:04:35.131 03:07:18 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:35.131 03:07:18 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:35.131 03:07:18 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:04:35.131 03:07:18 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:35.131 03:07:18 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:35.131 03:07:18 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.131 03:07:18 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.131 03:07:18 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.131 03:07:18 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.131 03:07:18 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.131 03:07:18 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.131 03:07:18 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.131 03:07:18 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.131 03:07:18 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.131 03:07:18 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.131 03:07:18 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.131 03:07:18 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:35.131 03:07:18 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:35.131 03:07:18 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.131 03:07:18 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:35.131 03:07:18 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:35.131 03:07:18 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:35.131 03:07:18 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.131 03:07:18 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:35.131 03:07:18 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.131 03:07:18 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:35.131 03:07:18 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:35.131 03:07:18 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.131 03:07:18 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:35.131 03:07:18 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.131 03:07:18 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.131 03:07:18 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.131 03:07:18 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:35.131 03:07:18 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.131 03:07:18 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:35.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.131 --rc genhtml_branch_coverage=1 00:04:35.131 --rc genhtml_function_coverage=1 00:04:35.131 --rc genhtml_legend=1 00:04:35.131 --rc geninfo_all_blocks=1 00:04:35.131 --rc geninfo_unexecuted_blocks=1 00:04:35.131 00:04:35.131 ' 00:04:35.131 03:07:18 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:35.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.131 --rc genhtml_branch_coverage=1 00:04:35.131 --rc genhtml_function_coverage=1 00:04:35.131 --rc genhtml_legend=1 00:04:35.131 --rc geninfo_all_blocks=1 00:04:35.131 --rc geninfo_unexecuted_blocks=1 00:04:35.131 00:04:35.131 ' 00:04:35.131 03:07:18 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:35.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.131 --rc genhtml_branch_coverage=1 00:04:35.131 --rc genhtml_function_coverage=1 00:04:35.131 --rc genhtml_legend=1 00:04:35.131 --rc geninfo_all_blocks=1 00:04:35.131 --rc geninfo_unexecuted_blocks=1 00:04:35.131 00:04:35.131 ' 00:04:35.131 03:07:18 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:35.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.131 --rc genhtml_branch_coverage=1 00:04:35.131 --rc genhtml_function_coverage=1 00:04:35.131 --rc genhtml_legend=1 00:04:35.131 --rc geninfo_all_blocks=1 00:04:35.131 --rc geninfo_unexecuted_blocks=1 00:04:35.131 00:04:35.131 ' 00:04:35.131 03:07:18 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:35.131 03:07:18 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:35.131 03:07:18 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:35.131 03:07:18 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:35.131 03:07:18 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:35.131 03:07:18 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:35.131 03:07:18 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:35.131 03:07:18 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:35.131 03:07:18 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:35.131 03:07:18 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:35.131 03:07:18 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:35.131 03:07:18 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:35.131 03:07:18 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:04:35.131 03:07:18 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:04:35.131 03:07:18 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:35.131 03:07:18 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:35.131 03:07:18 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:35.131 03:07:18 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:35.131 03:07:18 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:35.131 03:07:18 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:35.131 03:07:18 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:35.131 03:07:18 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:35.131 03:07:18 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:35.131 03:07:18 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.131 03:07:18 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.132 03:07:18 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.132 03:07:18 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:35.132 03:07:18 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.132 03:07:18 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:35.132 03:07:18 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:35.132 03:07:18 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:35.132 03:07:18 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:35.132 03:07:18 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:35.132 03:07:18 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:35.132 03:07:18 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:35.132 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:35.132 03:07:18 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:35.132 03:07:18 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:35.132 03:07:18 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:35.132 03:07:18 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:35.132 03:07:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:35.132 INFO: launching applications... 00:04:35.132 03:07:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:35.132 03:07:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:35.132 03:07:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:35.132 03:07:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:35.132 03:07:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:35.132 03:07:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:35.132 03:07:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:35.132 03:07:18 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:35.132 03:07:18 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
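The common.sh sourced above keys all per-app state off the app name ("target" here) in bash associative arrays. A minimal sketch of that bookkeeping pattern using the values seen in the log; the final comment about later use is an assumption about how the harness consumes these arrays:

    declare -A app_pid=() app_socket=() app_params=() configs_path=()
    app_socket[target]=/var/tmp/spdk_tgt.sock
    app_params[target]='-m 0x1 -s 1024'
    configs_path[target]=/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
    # assumed usage: app_pid[target]=$! is recorded after launch and read back as ${app_pid[$app]}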
00:04:35.132 03:07:18 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:35.132 03:07:18 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:35.132 03:07:18 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:35.132 03:07:18 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:35.132 03:07:18 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:35.132 03:07:18 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:35.132 03:07:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:35.132 03:07:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:35.132 03:07:18 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57610 00:04:35.132 03:07:18 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:35.132 Waiting for target to run... 00:04:35.132 03:07:18 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57610 /var/tmp/spdk_tgt.sock 00:04:35.132 03:07:18 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:35.132 03:07:18 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 57610 ']' 00:04:35.132 03:07:18 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:35.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:35.132 03:07:18 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:35.132 03:07:18 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:35.132 03:07:18 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:35.132 03:07:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:35.391 [2024-10-09 03:07:18.480527] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:04:35.391 [2024-10-09 03:07:18.480963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57610 ] 00:04:35.651 [2024-10-09 03:07:18.912345] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.911 [2024-10-09 03:07:19.005390] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.911 [2024-10-09 03:07:19.040550] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:36.477 00:04:36.477 INFO: shutting down applications... 00:04:36.477 03:07:19 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:36.477 03:07:19 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:36.477 03:07:19 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:36.477 03:07:19 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
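The launch-and-wait step above boils down to starting spdk_tgt in the background with the extra-key JSON, recording its pid, and polling the RPC socket until it answers. A sketch under those assumptions; the retry count and the use of rpc_get_methods as the liveness probe are illustrative, not the exact waitforlisten implementation:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json "$SPDK/test/json_config/extra_key.json" &
    app_pid[target]=$!
    for ((i = 0; i < 100; i++)); do
        # any cheap RPC works as a readiness probe once the socket is up
        "$SPDK/scripts/rpc.py" -t 1 -s /var/tmp/spdk_tgt.sock rpc_get_methods &>/dev/null && break
        sleep 0.5
    done

Shutdown is the mirror image seen just below: kill -SIGINT on the recorded pid, then kill -0 in a bounded loop until the process is gone.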
00:04:36.477 03:07:19 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:36.477 03:07:19 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:36.477 03:07:19 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:36.477 03:07:19 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57610 ]] 00:04:36.477 03:07:19 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57610 00:04:36.477 03:07:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:36.477 03:07:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:36.477 03:07:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57610 00:04:36.477 03:07:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:37.046 03:07:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:37.046 03:07:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:37.046 03:07:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57610 00:04:37.046 03:07:20 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:37.046 03:07:20 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:37.046 03:07:20 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:37.046 03:07:20 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:37.046 SPDK target shutdown done 00:04:37.046 03:07:20 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:37.046 Success 00:04:37.046 00:04:37.046 real 0m1.843s 00:04:37.046 user 0m1.831s 00:04:37.046 sys 0m0.472s 00:04:37.046 03:07:20 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.046 03:07:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:37.046 ************************************ 00:04:37.046 END TEST json_config_extra_key 00:04:37.046 ************************************ 00:04:37.046 03:07:20 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:37.046 03:07:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.046 03:07:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.046 03:07:20 -- common/autotest_common.sh@10 -- # set +x 00:04:37.046 ************************************ 00:04:37.046 START TEST alias_rpc 00:04:37.046 ************************************ 00:04:37.046 03:07:20 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:37.046 * Looking for test storage... 
00:04:37.046 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:37.046 03:07:20 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:37.046 03:07:20 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:37.046 03:07:20 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:37.046 03:07:20 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:37.046 03:07:20 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.046 03:07:20 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.046 03:07:20 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.046 03:07:20 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.046 03:07:20 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.046 03:07:20 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.046 03:07:20 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.046 03:07:20 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.046 03:07:20 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.046 03:07:20 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.046 03:07:20 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.046 03:07:20 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:37.046 03:07:20 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:37.046 03:07:20 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.046 03:07:20 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:37.046 03:07:20 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:37.046 03:07:20 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:37.046 03:07:20 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.046 03:07:20 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:37.046 03:07:20 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.046 03:07:20 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:37.046 03:07:20 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:37.046 03:07:20 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.046 03:07:20 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:37.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:37.046 03:07:20 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.046 03:07:20 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.046 03:07:20 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.046 03:07:20 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:37.046 03:07:20 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.046 03:07:20 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:37.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.046 --rc genhtml_branch_coverage=1 00:04:37.046 --rc genhtml_function_coverage=1 00:04:37.046 --rc genhtml_legend=1 00:04:37.046 --rc geninfo_all_blocks=1 00:04:37.046 --rc geninfo_unexecuted_blocks=1 00:04:37.046 00:04:37.046 ' 00:04:37.046 03:07:20 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:37.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.046 --rc genhtml_branch_coverage=1 00:04:37.046 --rc genhtml_function_coverage=1 00:04:37.046 --rc genhtml_legend=1 00:04:37.046 --rc geninfo_all_blocks=1 00:04:37.046 --rc geninfo_unexecuted_blocks=1 00:04:37.046 00:04:37.046 ' 00:04:37.046 03:07:20 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:37.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.046 --rc genhtml_branch_coverage=1 00:04:37.046 --rc genhtml_function_coverage=1 00:04:37.046 --rc genhtml_legend=1 00:04:37.046 --rc geninfo_all_blocks=1 00:04:37.046 --rc geninfo_unexecuted_blocks=1 00:04:37.046 00:04:37.046 ' 00:04:37.046 03:07:20 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:37.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.046 --rc genhtml_branch_coverage=1 00:04:37.046 --rc genhtml_function_coverage=1 00:04:37.046 --rc genhtml_legend=1 00:04:37.046 --rc geninfo_all_blocks=1 00:04:37.046 --rc geninfo_unexecuted_blocks=1 00:04:37.046 00:04:37.046 ' 00:04:37.046 03:07:20 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:37.046 03:07:20 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57683 00:04:37.046 03:07:20 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57683 00:04:37.046 03:07:20 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 57683 ']' 00:04:37.046 03:07:20 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.046 03:07:20 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.046 03:07:20 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:37.046 03:07:20 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.046 03:07:20 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:37.046 03:07:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.305 [2024-10-09 03:07:20.357085] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:04:37.305 [2024-10-09 03:07:20.357676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57683 ] 00:04:37.305 [2024-10-09 03:07:20.494654] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.305 [2024-10-09 03:07:20.601135] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.564 [2024-10-09 03:07:20.668483] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:38.132 03:07:21 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:38.132 03:07:21 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:38.132 03:07:21 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:38.390 03:07:21 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57683 00:04:38.390 03:07:21 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 57683 ']' 00:04:38.390 03:07:21 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 57683 00:04:38.390 03:07:21 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:38.390 03:07:21 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:38.390 03:07:21 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57683 00:04:38.390 killing process with pid 57683 00:04:38.390 03:07:21 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:38.390 03:07:21 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:38.390 03:07:21 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57683' 00:04:38.390 03:07:21 alias_rpc -- common/autotest_common.sh@969 -- # kill 57683 00:04:38.390 03:07:21 alias_rpc -- common/autotest_common.sh@974 -- # wait 57683 00:04:38.958 ************************************ 00:04:38.958 END TEST alias_rpc 00:04:38.958 ************************************ 00:04:38.958 00:04:38.958 real 0m1.939s 00:04:38.958 user 0m2.204s 00:04:38.958 sys 0m0.448s 00:04:38.958 03:07:22 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.958 03:07:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.958 03:07:22 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:38.958 03:07:22 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:38.958 03:07:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.958 03:07:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.958 03:07:22 -- common/autotest_common.sh@10 -- # set +x 00:04:38.958 ************************************ 00:04:38.958 START TEST spdkcli_tcp 00:04:38.958 ************************************ 00:04:38.958 03:07:22 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:38.958 * Looking for test storage... 
00:04:38.958 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:38.958 03:07:22 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:38.958 03:07:22 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:04:38.958 03:07:22 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:39.217 03:07:22 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:39.217 03:07:22 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.217 03:07:22 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.217 03:07:22 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.217 03:07:22 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.217 03:07:22 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.217 03:07:22 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.217 03:07:22 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.217 03:07:22 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.217 03:07:22 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.217 03:07:22 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.217 03:07:22 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.217 03:07:22 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:39.217 03:07:22 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:39.217 03:07:22 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.217 03:07:22 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:39.217 03:07:22 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:39.217 03:07:22 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:39.217 03:07:22 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.217 03:07:22 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:39.217 03:07:22 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.217 03:07:22 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:39.217 03:07:22 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:39.217 03:07:22 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.217 03:07:22 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:39.217 03:07:22 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.218 03:07:22 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.218 03:07:22 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.218 03:07:22 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:39.218 03:07:22 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.218 03:07:22 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:39.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.218 --rc genhtml_branch_coverage=1 00:04:39.218 --rc genhtml_function_coverage=1 00:04:39.218 --rc genhtml_legend=1 00:04:39.218 --rc geninfo_all_blocks=1 00:04:39.218 --rc geninfo_unexecuted_blocks=1 00:04:39.218 00:04:39.218 ' 00:04:39.218 03:07:22 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:39.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.218 --rc genhtml_branch_coverage=1 00:04:39.218 --rc genhtml_function_coverage=1 00:04:39.218 --rc genhtml_legend=1 00:04:39.218 --rc geninfo_all_blocks=1 00:04:39.218 --rc geninfo_unexecuted_blocks=1 00:04:39.218 
00:04:39.218 ' 00:04:39.218 03:07:22 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:39.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.218 --rc genhtml_branch_coverage=1 00:04:39.218 --rc genhtml_function_coverage=1 00:04:39.218 --rc genhtml_legend=1 00:04:39.218 --rc geninfo_all_blocks=1 00:04:39.218 --rc geninfo_unexecuted_blocks=1 00:04:39.218 00:04:39.218 ' 00:04:39.218 03:07:22 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:39.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.218 --rc genhtml_branch_coverage=1 00:04:39.218 --rc genhtml_function_coverage=1 00:04:39.218 --rc genhtml_legend=1 00:04:39.218 --rc geninfo_all_blocks=1 00:04:39.218 --rc geninfo_unexecuted_blocks=1 00:04:39.218 00:04:39.218 ' 00:04:39.218 03:07:22 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:39.218 03:07:22 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:39.218 03:07:22 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:39.218 03:07:22 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:39.218 03:07:22 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:39.218 03:07:22 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:39.218 03:07:22 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:39.218 03:07:22 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:39.218 03:07:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:39.218 03:07:22 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57767 00:04:39.218 03:07:22 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:39.218 03:07:22 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57767 00:04:39.218 03:07:22 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 57767 ']' 00:04:39.218 03:07:22 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.218 03:07:22 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:39.218 03:07:22 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.218 03:07:22 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:39.218 03:07:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:39.218 [2024-10-09 03:07:22.359421] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:04:39.218 [2024-10-09 03:07:22.359674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57767 ] 00:04:39.218 [2024-10-09 03:07:22.489811] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:39.476 [2024-10-09 03:07:22.592745] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.476 [2024-10-09 03:07:22.592754] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.476 [2024-10-09 03:07:22.659491] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:40.413 03:07:23 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:40.413 03:07:23 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:40.413 03:07:23 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:40.413 03:07:23 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57784 00:04:40.413 03:07:23 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:40.413 [ 00:04:40.413 "bdev_malloc_delete", 00:04:40.413 "bdev_malloc_create", 00:04:40.413 "bdev_null_resize", 00:04:40.413 "bdev_null_delete", 00:04:40.413 "bdev_null_create", 00:04:40.413 "bdev_nvme_cuse_unregister", 00:04:40.413 "bdev_nvme_cuse_register", 00:04:40.413 "bdev_opal_new_user", 00:04:40.413 "bdev_opal_set_lock_state", 00:04:40.413 "bdev_opal_delete", 00:04:40.413 "bdev_opal_get_info", 00:04:40.413 "bdev_opal_create", 00:04:40.413 "bdev_nvme_opal_revert", 00:04:40.413 "bdev_nvme_opal_init", 00:04:40.413 "bdev_nvme_send_cmd", 00:04:40.413 "bdev_nvme_set_keys", 00:04:40.413 "bdev_nvme_get_path_iostat", 00:04:40.413 "bdev_nvme_get_mdns_discovery_info", 00:04:40.413 "bdev_nvme_stop_mdns_discovery", 00:04:40.413 "bdev_nvme_start_mdns_discovery", 00:04:40.413 "bdev_nvme_set_multipath_policy", 00:04:40.413 "bdev_nvme_set_preferred_path", 00:04:40.413 "bdev_nvme_get_io_paths", 00:04:40.413 "bdev_nvme_remove_error_injection", 00:04:40.413 "bdev_nvme_add_error_injection", 00:04:40.413 "bdev_nvme_get_discovery_info", 00:04:40.413 "bdev_nvme_stop_discovery", 00:04:40.413 "bdev_nvme_start_discovery", 00:04:40.413 "bdev_nvme_get_controller_health_info", 00:04:40.413 "bdev_nvme_disable_controller", 00:04:40.413 "bdev_nvme_enable_controller", 00:04:40.413 "bdev_nvme_reset_controller", 00:04:40.413 "bdev_nvme_get_transport_statistics", 00:04:40.413 "bdev_nvme_apply_firmware", 00:04:40.413 "bdev_nvme_detach_controller", 00:04:40.413 "bdev_nvme_get_controllers", 00:04:40.413 "bdev_nvme_attach_controller", 00:04:40.413 "bdev_nvme_set_hotplug", 00:04:40.413 "bdev_nvme_set_options", 00:04:40.413 "bdev_passthru_delete", 00:04:40.413 "bdev_passthru_create", 00:04:40.413 "bdev_lvol_set_parent_bdev", 00:04:40.413 "bdev_lvol_set_parent", 00:04:40.413 "bdev_lvol_check_shallow_copy", 00:04:40.413 "bdev_lvol_start_shallow_copy", 00:04:40.413 "bdev_lvol_grow_lvstore", 00:04:40.413 "bdev_lvol_get_lvols", 00:04:40.413 "bdev_lvol_get_lvstores", 00:04:40.413 "bdev_lvol_delete", 00:04:40.413 "bdev_lvol_set_read_only", 00:04:40.413 "bdev_lvol_resize", 00:04:40.413 "bdev_lvol_decouple_parent", 00:04:40.413 "bdev_lvol_inflate", 00:04:40.413 "bdev_lvol_rename", 00:04:40.413 "bdev_lvol_clone_bdev", 00:04:40.413 "bdev_lvol_clone", 00:04:40.413 "bdev_lvol_snapshot", 
00:04:40.413 "bdev_lvol_create", 00:04:40.413 "bdev_lvol_delete_lvstore", 00:04:40.413 "bdev_lvol_rename_lvstore", 00:04:40.413 "bdev_lvol_create_lvstore", 00:04:40.413 "bdev_raid_set_options", 00:04:40.413 "bdev_raid_remove_base_bdev", 00:04:40.413 "bdev_raid_add_base_bdev", 00:04:40.413 "bdev_raid_delete", 00:04:40.413 "bdev_raid_create", 00:04:40.413 "bdev_raid_get_bdevs", 00:04:40.413 "bdev_error_inject_error", 00:04:40.413 "bdev_error_delete", 00:04:40.413 "bdev_error_create", 00:04:40.413 "bdev_split_delete", 00:04:40.413 "bdev_split_create", 00:04:40.413 "bdev_delay_delete", 00:04:40.413 "bdev_delay_create", 00:04:40.413 "bdev_delay_update_latency", 00:04:40.413 "bdev_zone_block_delete", 00:04:40.413 "bdev_zone_block_create", 00:04:40.413 "blobfs_create", 00:04:40.413 "blobfs_detect", 00:04:40.413 "blobfs_set_cache_size", 00:04:40.413 "bdev_aio_delete", 00:04:40.413 "bdev_aio_rescan", 00:04:40.413 "bdev_aio_create", 00:04:40.413 "bdev_ftl_set_property", 00:04:40.413 "bdev_ftl_get_properties", 00:04:40.413 "bdev_ftl_get_stats", 00:04:40.413 "bdev_ftl_unmap", 00:04:40.413 "bdev_ftl_unload", 00:04:40.413 "bdev_ftl_delete", 00:04:40.413 "bdev_ftl_load", 00:04:40.413 "bdev_ftl_create", 00:04:40.413 "bdev_virtio_attach_controller", 00:04:40.413 "bdev_virtio_scsi_get_devices", 00:04:40.413 "bdev_virtio_detach_controller", 00:04:40.413 "bdev_virtio_blk_set_hotplug", 00:04:40.413 "bdev_iscsi_delete", 00:04:40.413 "bdev_iscsi_create", 00:04:40.413 "bdev_iscsi_set_options", 00:04:40.413 "bdev_uring_delete", 00:04:40.413 "bdev_uring_rescan", 00:04:40.413 "bdev_uring_create", 00:04:40.413 "accel_error_inject_error", 00:04:40.413 "ioat_scan_accel_module", 00:04:40.413 "dsa_scan_accel_module", 00:04:40.413 "iaa_scan_accel_module", 00:04:40.413 "keyring_file_remove_key", 00:04:40.413 "keyring_file_add_key", 00:04:40.413 "keyring_linux_set_options", 00:04:40.413 "fsdev_aio_delete", 00:04:40.413 "fsdev_aio_create", 00:04:40.413 "iscsi_get_histogram", 00:04:40.413 "iscsi_enable_histogram", 00:04:40.413 "iscsi_set_options", 00:04:40.413 "iscsi_get_auth_groups", 00:04:40.413 "iscsi_auth_group_remove_secret", 00:04:40.413 "iscsi_auth_group_add_secret", 00:04:40.413 "iscsi_delete_auth_group", 00:04:40.413 "iscsi_create_auth_group", 00:04:40.413 "iscsi_set_discovery_auth", 00:04:40.413 "iscsi_get_options", 00:04:40.413 "iscsi_target_node_request_logout", 00:04:40.413 "iscsi_target_node_set_redirect", 00:04:40.413 "iscsi_target_node_set_auth", 00:04:40.413 "iscsi_target_node_add_lun", 00:04:40.413 "iscsi_get_stats", 00:04:40.413 "iscsi_get_connections", 00:04:40.413 "iscsi_portal_group_set_auth", 00:04:40.413 "iscsi_start_portal_group", 00:04:40.413 "iscsi_delete_portal_group", 00:04:40.413 "iscsi_create_portal_group", 00:04:40.413 "iscsi_get_portal_groups", 00:04:40.413 "iscsi_delete_target_node", 00:04:40.413 "iscsi_target_node_remove_pg_ig_maps", 00:04:40.413 "iscsi_target_node_add_pg_ig_maps", 00:04:40.413 "iscsi_create_target_node", 00:04:40.413 "iscsi_get_target_nodes", 00:04:40.413 "iscsi_delete_initiator_group", 00:04:40.413 "iscsi_initiator_group_remove_initiators", 00:04:40.413 "iscsi_initiator_group_add_initiators", 00:04:40.413 "iscsi_create_initiator_group", 00:04:40.413 "iscsi_get_initiator_groups", 00:04:40.413 "nvmf_set_crdt", 00:04:40.413 "nvmf_set_config", 00:04:40.413 "nvmf_set_max_subsystems", 00:04:40.413 "nvmf_stop_mdns_prr", 00:04:40.413 "nvmf_publish_mdns_prr", 00:04:40.414 "nvmf_subsystem_get_listeners", 00:04:40.414 "nvmf_subsystem_get_qpairs", 00:04:40.414 
"nvmf_subsystem_get_controllers", 00:04:40.414 "nvmf_get_stats", 00:04:40.414 "nvmf_get_transports", 00:04:40.414 "nvmf_create_transport", 00:04:40.414 "nvmf_get_targets", 00:04:40.414 "nvmf_delete_target", 00:04:40.414 "nvmf_create_target", 00:04:40.414 "nvmf_subsystem_allow_any_host", 00:04:40.414 "nvmf_subsystem_set_keys", 00:04:40.414 "nvmf_subsystem_remove_host", 00:04:40.414 "nvmf_subsystem_add_host", 00:04:40.414 "nvmf_ns_remove_host", 00:04:40.414 "nvmf_ns_add_host", 00:04:40.414 "nvmf_subsystem_remove_ns", 00:04:40.414 "nvmf_subsystem_set_ns_ana_group", 00:04:40.414 "nvmf_subsystem_add_ns", 00:04:40.414 "nvmf_subsystem_listener_set_ana_state", 00:04:40.414 "nvmf_discovery_get_referrals", 00:04:40.414 "nvmf_discovery_remove_referral", 00:04:40.414 "nvmf_discovery_add_referral", 00:04:40.414 "nvmf_subsystem_remove_listener", 00:04:40.414 "nvmf_subsystem_add_listener", 00:04:40.414 "nvmf_delete_subsystem", 00:04:40.414 "nvmf_create_subsystem", 00:04:40.414 "nvmf_get_subsystems", 00:04:40.414 "env_dpdk_get_mem_stats", 00:04:40.414 "nbd_get_disks", 00:04:40.414 "nbd_stop_disk", 00:04:40.414 "nbd_start_disk", 00:04:40.414 "ublk_recover_disk", 00:04:40.414 "ublk_get_disks", 00:04:40.414 "ublk_stop_disk", 00:04:40.414 "ublk_start_disk", 00:04:40.414 "ublk_destroy_target", 00:04:40.414 "ublk_create_target", 00:04:40.414 "virtio_blk_create_transport", 00:04:40.414 "virtio_blk_get_transports", 00:04:40.414 "vhost_controller_set_coalescing", 00:04:40.414 "vhost_get_controllers", 00:04:40.414 "vhost_delete_controller", 00:04:40.414 "vhost_create_blk_controller", 00:04:40.414 "vhost_scsi_controller_remove_target", 00:04:40.414 "vhost_scsi_controller_add_target", 00:04:40.414 "vhost_start_scsi_controller", 00:04:40.414 "vhost_create_scsi_controller", 00:04:40.414 "thread_set_cpumask", 00:04:40.414 "scheduler_set_options", 00:04:40.414 "framework_get_governor", 00:04:40.414 "framework_get_scheduler", 00:04:40.414 "framework_set_scheduler", 00:04:40.414 "framework_get_reactors", 00:04:40.414 "thread_get_io_channels", 00:04:40.414 "thread_get_pollers", 00:04:40.414 "thread_get_stats", 00:04:40.414 "framework_monitor_context_switch", 00:04:40.414 "spdk_kill_instance", 00:04:40.414 "log_enable_timestamps", 00:04:40.414 "log_get_flags", 00:04:40.414 "log_clear_flag", 00:04:40.414 "log_set_flag", 00:04:40.414 "log_get_level", 00:04:40.414 "log_set_level", 00:04:40.414 "log_get_print_level", 00:04:40.414 "log_set_print_level", 00:04:40.414 "framework_enable_cpumask_locks", 00:04:40.414 "framework_disable_cpumask_locks", 00:04:40.414 "framework_wait_init", 00:04:40.414 "framework_start_init", 00:04:40.414 "scsi_get_devices", 00:04:40.414 "bdev_get_histogram", 00:04:40.414 "bdev_enable_histogram", 00:04:40.414 "bdev_set_qos_limit", 00:04:40.414 "bdev_set_qd_sampling_period", 00:04:40.414 "bdev_get_bdevs", 00:04:40.414 "bdev_reset_iostat", 00:04:40.414 "bdev_get_iostat", 00:04:40.414 "bdev_examine", 00:04:40.414 "bdev_wait_for_examine", 00:04:40.414 "bdev_set_options", 00:04:40.414 "accel_get_stats", 00:04:40.414 "accel_set_options", 00:04:40.414 "accel_set_driver", 00:04:40.414 "accel_crypto_key_destroy", 00:04:40.414 "accel_crypto_keys_get", 00:04:40.414 "accel_crypto_key_create", 00:04:40.414 "accel_assign_opc", 00:04:40.414 "accel_get_module_info", 00:04:40.414 "accel_get_opc_assignments", 00:04:40.414 "vmd_rescan", 00:04:40.414 "vmd_remove_device", 00:04:40.414 "vmd_enable", 00:04:40.414 "sock_get_default_impl", 00:04:40.414 "sock_set_default_impl", 00:04:40.414 "sock_impl_set_options", 00:04:40.414 
"sock_impl_get_options", 00:04:40.414 "iobuf_get_stats", 00:04:40.414 "iobuf_set_options", 00:04:40.414 "keyring_get_keys", 00:04:40.414 "framework_get_pci_devices", 00:04:40.414 "framework_get_config", 00:04:40.414 "framework_get_subsystems", 00:04:40.414 "fsdev_set_opts", 00:04:40.414 "fsdev_get_opts", 00:04:40.414 "trace_get_info", 00:04:40.414 "trace_get_tpoint_group_mask", 00:04:40.414 "trace_disable_tpoint_group", 00:04:40.414 "trace_enable_tpoint_group", 00:04:40.414 "trace_clear_tpoint_mask", 00:04:40.414 "trace_set_tpoint_mask", 00:04:40.414 "notify_get_notifications", 00:04:40.414 "notify_get_types", 00:04:40.414 "spdk_get_version", 00:04:40.414 "rpc_get_methods" 00:04:40.414 ] 00:04:40.414 03:07:23 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:40.414 03:07:23 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:40.414 03:07:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:40.414 03:07:23 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:40.414 03:07:23 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57767 00:04:40.414 03:07:23 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 57767 ']' 00:04:40.414 03:07:23 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 57767 00:04:40.414 03:07:23 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:40.414 03:07:23 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:40.414 03:07:23 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57767 00:04:40.414 killing process with pid 57767 00:04:40.414 03:07:23 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:40.414 03:07:23 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:40.414 03:07:23 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57767' 00:04:40.414 03:07:23 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 57767 00:04:40.414 03:07:23 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 57767 00:04:40.982 ************************************ 00:04:40.982 END TEST spdkcli_tcp 00:04:40.982 ************************************ 00:04:40.982 00:04:40.982 real 0m2.054s 00:04:40.982 user 0m3.769s 00:04:40.982 sys 0m0.507s 00:04:40.982 03:07:24 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.982 03:07:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:40.982 03:07:24 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:40.982 03:07:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.982 03:07:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.982 03:07:24 -- common/autotest_common.sh@10 -- # set +x 00:04:40.982 ************************************ 00:04:40.982 START TEST dpdk_mem_utility 00:04:40.982 ************************************ 00:04:40.982 03:07:24 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:41.242 * Looking for test storage... 
00:04:41.242 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:41.242 03:07:24 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:41.242 03:07:24 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:04:41.242 03:07:24 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:41.242 03:07:24 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:41.242 03:07:24 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.242 03:07:24 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.242 03:07:24 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.242 03:07:24 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.242 03:07:24 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.242 03:07:24 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.242 03:07:24 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.242 03:07:24 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.242 03:07:24 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.242 03:07:24 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.242 03:07:24 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.242 03:07:24 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:41.242 03:07:24 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:41.242 03:07:24 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.242 03:07:24 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:41.242 03:07:24 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:41.242 03:07:24 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:41.242 03:07:24 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.242 03:07:24 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:41.242 03:07:24 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.242 03:07:24 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:41.242 03:07:24 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:41.242 03:07:24 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.242 03:07:24 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:41.242 03:07:24 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.242 03:07:24 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.242 03:07:24 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.242 03:07:24 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:41.242 03:07:24 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.242 03:07:24 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:41.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.242 --rc genhtml_branch_coverage=1 00:04:41.242 --rc genhtml_function_coverage=1 00:04:41.242 --rc genhtml_legend=1 00:04:41.242 --rc geninfo_all_blocks=1 00:04:41.242 --rc geninfo_unexecuted_blocks=1 00:04:41.242 00:04:41.242 ' 00:04:41.242 03:07:24 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:41.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.242 --rc 
genhtml_branch_coverage=1 00:04:41.242 --rc genhtml_function_coverage=1 00:04:41.242 --rc genhtml_legend=1 00:04:41.242 --rc geninfo_all_blocks=1 00:04:41.242 --rc geninfo_unexecuted_blocks=1 00:04:41.242 00:04:41.242 ' 00:04:41.242 03:07:24 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:41.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.242 --rc genhtml_branch_coverage=1 00:04:41.242 --rc genhtml_function_coverage=1 00:04:41.242 --rc genhtml_legend=1 00:04:41.242 --rc geninfo_all_blocks=1 00:04:41.242 --rc geninfo_unexecuted_blocks=1 00:04:41.242 00:04:41.242 ' 00:04:41.242 03:07:24 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:41.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.242 --rc genhtml_branch_coverage=1 00:04:41.242 --rc genhtml_function_coverage=1 00:04:41.242 --rc genhtml_legend=1 00:04:41.242 --rc geninfo_all_blocks=1 00:04:41.242 --rc geninfo_unexecuted_blocks=1 00:04:41.242 00:04:41.242 ' 00:04:41.242 03:07:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:41.242 03:07:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57866 00:04:41.242 03:07:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.242 03:07:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57866 00:04:41.242 03:07:24 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 57866 ']' 00:04:41.242 03:07:24 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.242 03:07:24 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:41.242 03:07:24 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.242 03:07:24 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:41.242 03:07:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:41.242 [2024-10-09 03:07:24.463865] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:04:41.242 [2024-10-09 03:07:24.464896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57866 ] 00:04:41.501 [2024-10-09 03:07:24.603498] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.501 [2024-10-09 03:07:24.694110] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.501 [2024-10-09 03:07:24.761344] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:42.440 03:07:25 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:42.440 03:07:25 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:42.440 03:07:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:42.440 03:07:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:42.440 03:07:25 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.440 03:07:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:42.440 { 00:04:42.440 "filename": "/tmp/spdk_mem_dump.txt" 00:04:42.440 } 00:04:42.440 03:07:25 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.440 03:07:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:42.440 DPDK memory size 860.000000 MiB in 1 heap(s) 00:04:42.440 1 heaps totaling size 860.000000 MiB 00:04:42.440 size: 860.000000 MiB heap id: 0 00:04:42.440 end heaps---------- 00:04:42.440 9 mempools totaling size 642.649841 MiB 00:04:42.440 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:42.440 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:42.440 size: 92.545471 MiB name: bdev_io_57866 00:04:42.440 size: 51.011292 MiB name: evtpool_57866 00:04:42.440 size: 50.003479 MiB name: msgpool_57866 00:04:42.440 size: 36.509338 MiB name: fsdev_io_57866 00:04:42.440 size: 21.763794 MiB name: PDU_Pool 00:04:42.440 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:42.440 size: 0.026123 MiB name: Session_Pool 00:04:42.440 end mempools------- 00:04:42.440 6 memzones totaling size 4.142822 MiB 00:04:42.440 size: 1.000366 MiB name: RG_ring_0_57866 00:04:42.440 size: 1.000366 MiB name: RG_ring_1_57866 00:04:42.440 size: 1.000366 MiB name: RG_ring_4_57866 00:04:42.440 size: 1.000366 MiB name: RG_ring_5_57866 00:04:42.440 size: 0.125366 MiB name: RG_ring_2_57866 00:04:42.440 size: 0.015991 MiB name: RG_ring_3_57866 00:04:42.440 end memzones------- 00:04:42.440 03:07:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:42.440 heap id: 0 total size: 860.000000 MiB number of busy elements: 306 number of free elements: 16 00:04:42.440 list of free elements. 
size: 13.936707 MiB 00:04:42.440 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:42.440 element at address: 0x200000800000 with size: 1.996948 MiB 00:04:42.440 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:04:42.440 element at address: 0x20001be00000 with size: 0.999878 MiB 00:04:42.440 element at address: 0x200034a00000 with size: 0.994446 MiB 00:04:42.440 element at address: 0x200009600000 with size: 0.959839 MiB 00:04:42.440 element at address: 0x200015e00000 with size: 0.954285 MiB 00:04:42.440 element at address: 0x20001c000000 with size: 0.936584 MiB 00:04:42.440 element at address: 0x200000200000 with size: 0.834839 MiB 00:04:42.440 element at address: 0x20001d800000 with size: 0.568054 MiB 00:04:42.440 element at address: 0x20000d800000 with size: 0.489807 MiB 00:04:42.440 element at address: 0x200003e00000 with size: 0.487732 MiB 00:04:42.440 element at address: 0x20001c200000 with size: 0.485657 MiB 00:04:42.440 element at address: 0x200007000000 with size: 0.480286 MiB 00:04:42.440 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:04:42.440 element at address: 0x200003a00000 with size: 0.353210 MiB 00:04:42.440 list of standard malloc elements. size: 199.266602 MiB 00:04:42.440 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:04:42.440 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:04:42.440 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:04:42.440 element at address: 0x20001befff80 with size: 1.000122 MiB 00:04:42.440 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:04:42.440 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:42.440 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:04:42.440 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:42.440 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:04:42.440 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d6d40 with size: 0.000183 MiB 
00:04:42.440 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:42.440 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:42.440 element at address: 0x200003a5a6c0 with size: 0.000183 MiB 00:04:42.440 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:42.440 element at address: 0x200003a5eb80 with size: 0.000183 MiB 00:04:42.440 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:04:42.440 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:04:42.440 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:04:42.440 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:04:42.440 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:04:42.440 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:04:42.440 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:04:42.440 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:04:42.440 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:04:42.440 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:04:42.440 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:04:42.440 element at address: 0x200003a7f680 with size: 0.000183 MiB 00:04:42.440 element at address: 0x200003aff940 with size: 0.000183 MiB 00:04:42.440 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:42.440 element at address: 0x200003e7cdc0 with size: 0.000183 MiB 00:04:42.440 element at address: 0x200003e7ce80 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7cf40 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7d000 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:04:42.441 element at 
address: 0x200003e7d600 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003eff000 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20000707af40 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20000707b000 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20000707b180 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20000707b240 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20000707b300 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20000707b480 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20000707b540 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20000707b600 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:04:42.441 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20000d87d7c0 
with size: 0.000183 MiB 00:04:42.441 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:04:42.441 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d8916c0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d891780 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d891840 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d891900 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d892080 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d892140 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d892200 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d892380 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d892440 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d892500 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d892680 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d892740 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d892800 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d892980 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d893040 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d893100 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d893280 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d893340 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d893400 with size: 0.000183 MiB 
00:04:42.441 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d893580 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d893640 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d893700 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d893880 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d893940 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d894000 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d894180 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d894240 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d894300 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d894480 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d894540 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d894600 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d894780 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d894840 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d894900 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d895080 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d895140 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d895200 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d895380 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20001d895440 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20002ac655c0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20002ac6c480 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:04:42.441 element at 
address: 0x20002ac6c600 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:04:42.441 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6eac0 
with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:04:42.442 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:04:42.442 list of memzone associated elements. 
size: 646.796692 MiB 00:04:42.442 element at address: 0x20001d895500 with size: 211.416748 MiB 00:04:42.442 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:42.442 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:04:42.442 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:42.442 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:04:42.442 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57866_0 00:04:42.442 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:42.442 associated memzone info: size: 48.002930 MiB name: MP_evtpool_57866_0 00:04:42.442 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:42.442 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57866_0 00:04:42.442 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:04:42.442 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57866_0 00:04:42.442 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:04:42.442 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:42.442 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:04:42.442 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:42.442 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:42.442 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_57866 00:04:42.442 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:42.442 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57866 00:04:42.442 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:42.442 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57866 00:04:42.442 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:04:42.442 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:42.442 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:04:42.442 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:42.442 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:04:42.442 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:42.442 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:04:42.442 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:42.442 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:42.442 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57866 00:04:42.442 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:42.442 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57866 00:04:42.442 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:04:42.442 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57866 00:04:42.442 element at address: 0x200034afe940 with size: 1.000488 MiB 00:04:42.442 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57866 00:04:42.442 element at address: 0x200003a7f740 with size: 0.500488 MiB 00:04:42.442 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57866 00:04:42.442 element at address: 0x200003e7ee00 with size: 0.500488 MiB 00:04:42.442 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57866 00:04:42.442 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:04:42.442 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:42.442 element at address: 0x20000707b780 with size: 0.500488 MiB 00:04:42.442 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:04:42.442 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:04:42.442 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:42.442 element at address: 0x200003a5ec40 with size: 0.125488 MiB 00:04:42.442 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57866 00:04:42.442 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:04:42.442 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:42.442 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:04:42.442 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:42.442 element at address: 0x200003a5a980 with size: 0.016113 MiB 00:04:42.442 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57866 00:04:42.442 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:04:42.442 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:42.442 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:04:42.442 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57866 00:04:42.442 element at address: 0x200003affa00 with size: 0.000305 MiB 00:04:42.442 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57866 00:04:42.442 element at address: 0x200003a5a780 with size: 0.000305 MiB 00:04:42.442 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57866 00:04:42.442 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:04:42.442 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:42.442 03:07:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:42.442 03:07:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57866 00:04:42.442 03:07:25 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 57866 ']' 00:04:42.442 03:07:25 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 57866 00:04:42.442 03:07:25 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:04:42.442 03:07:25 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:42.442 03:07:25 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57866 00:04:42.442 03:07:25 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:42.442 killing process with pid 57866 00:04:42.442 03:07:25 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:42.442 03:07:25 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57866' 00:04:42.442 03:07:25 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 57866 00:04:42.442 03:07:25 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 57866 00:04:43.011 00:04:43.011 real 0m1.828s 00:04:43.011 user 0m1.965s 00:04:43.011 sys 0m0.436s 00:04:43.011 03:07:26 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.011 ************************************ 00:04:43.011 END TEST dpdk_mem_utility 00:04:43.011 ************************************ 00:04:43.011 03:07:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:43.011 03:07:26 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:43.011 03:07:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.011 03:07:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.011 03:07:26 -- common/autotest_common.sh@10 -- # set +x 
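Stripped of the xtrace plumbing, the dpdk_mem_utility run above is three steps: start spdk_tgt, call the env_dpdk_get_mem_stats RPC (which, per the JSON reply in the log, writes /tmp/spdk_mem_dump.txt), then have scripts/dpdk_mem_info.py summarize that dump, once for the heap/mempool/memzone overview and once with -m 0 for the element-level detail of heap 0. A rough manual equivalent, assuming scripts/rpc.py as the RPC client (the rpc_cmd helper in the trace wraps it):

  # Manual re-run of the steps traced above; paths follow the log's repo layout.
  SPDK=/home/vagrant/spdk_repo/spdk

  "$SPDK/build/bin/spdk_tgt" &                     # start the target app
  TGT_PID=$!
  sleep 2                                          # crude wait; the test uses waitforlisten

  "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats    # dumps state to /tmp/spdk_mem_dump.txt
  "$SPDK/scripts/dpdk_mem_info.py"                 # heaps / mempools / memzones overview
  "$SPDK/scripts/dpdk_mem_info.py" -m 0            # free/malloc/memzone elements of heap 0

  kill "$TGT_PID"
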
00:04:43.011 ************************************ 00:04:43.011 START TEST event 00:04:43.011 ************************************ 00:04:43.011 03:07:26 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:43.011 * Looking for test storage... 00:04:43.011 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:43.011 03:07:26 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:43.011 03:07:26 event -- common/autotest_common.sh@1681 -- # lcov --version 00:04:43.011 03:07:26 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:43.011 03:07:26 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:43.011 03:07:26 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.011 03:07:26 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.011 03:07:26 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.011 03:07:26 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.011 03:07:26 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.011 03:07:26 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.011 03:07:26 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.011 03:07:26 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.011 03:07:26 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.011 03:07:26 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.011 03:07:26 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.011 03:07:26 event -- scripts/common.sh@344 -- # case "$op" in 00:04:43.011 03:07:26 event -- scripts/common.sh@345 -- # : 1 00:04:43.011 03:07:26 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.011 03:07:26 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:43.011 03:07:26 event -- scripts/common.sh@365 -- # decimal 1 00:04:43.011 03:07:26 event -- scripts/common.sh@353 -- # local d=1 00:04:43.011 03:07:26 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.011 03:07:26 event -- scripts/common.sh@355 -- # echo 1 00:04:43.011 03:07:26 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.011 03:07:26 event -- scripts/common.sh@366 -- # decimal 2 00:04:43.011 03:07:26 event -- scripts/common.sh@353 -- # local d=2 00:04:43.011 03:07:26 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.011 03:07:26 event -- scripts/common.sh@355 -- # echo 2 00:04:43.011 03:07:26 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.011 03:07:26 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.011 03:07:26 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.011 03:07:26 event -- scripts/common.sh@368 -- # return 0 00:04:43.011 03:07:26 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.011 03:07:26 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:43.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.011 --rc genhtml_branch_coverage=1 00:04:43.011 --rc genhtml_function_coverage=1 00:04:43.011 --rc genhtml_legend=1 00:04:43.011 --rc geninfo_all_blocks=1 00:04:43.011 --rc geninfo_unexecuted_blocks=1 00:04:43.011 00:04:43.011 ' 00:04:43.011 03:07:26 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:43.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.011 --rc genhtml_branch_coverage=1 00:04:43.011 --rc genhtml_function_coverage=1 00:04:43.011 --rc genhtml_legend=1 00:04:43.011 --rc 
geninfo_all_blocks=1 00:04:43.011 --rc geninfo_unexecuted_blocks=1 00:04:43.011 00:04:43.011 ' 00:04:43.011 03:07:26 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:43.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.011 --rc genhtml_branch_coverage=1 00:04:43.011 --rc genhtml_function_coverage=1 00:04:43.011 --rc genhtml_legend=1 00:04:43.011 --rc geninfo_all_blocks=1 00:04:43.011 --rc geninfo_unexecuted_blocks=1 00:04:43.011 00:04:43.011 ' 00:04:43.011 03:07:26 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:43.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.011 --rc genhtml_branch_coverage=1 00:04:43.011 --rc genhtml_function_coverage=1 00:04:43.011 --rc genhtml_legend=1 00:04:43.011 --rc geninfo_all_blocks=1 00:04:43.011 --rc geninfo_unexecuted_blocks=1 00:04:43.011 00:04:43.011 ' 00:04:43.011 03:07:26 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:43.011 03:07:26 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:43.011 03:07:26 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:43.011 03:07:26 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:04:43.011 03:07:26 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.011 03:07:26 event -- common/autotest_common.sh@10 -- # set +x 00:04:43.011 ************************************ 00:04:43.011 START TEST event_perf 00:04:43.011 ************************************ 00:04:43.011 03:07:26 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:43.270 Running I/O for 1 seconds...[2024-10-09 03:07:26.316683] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:04:43.270 [2024-10-09 03:07:26.316782] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57951 ] 00:04:43.270 [2024-10-09 03:07:26.451013] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:43.270 [2024-10-09 03:07:26.546361] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.270 [2024-10-09 03:07:26.546505] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:04:43.270 Running I/O for 1 seconds...[2024-10-09 03:07:26.547270] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:04:43.270 [2024-10-09 03:07:26.547280] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.646 00:04:44.646 lcore 0: 141971 00:04:44.646 lcore 1: 141972 00:04:44.646 lcore 2: 141970 00:04:44.646 lcore 3: 141972 00:04:44.646 done. 
00:04:44.646 00:04:44.646 real 0m1.351s 00:04:44.646 user 0m4.169s 00:04:44.646 sys 0m0.060s 00:04:44.646 03:07:27 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.646 03:07:27 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:44.646 ************************************ 00:04:44.646 END TEST event_perf 00:04:44.646 ************************************ 00:04:44.646 03:07:27 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:44.646 03:07:27 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:44.646 03:07:27 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.646 03:07:27 event -- common/autotest_common.sh@10 -- # set +x 00:04:44.646 ************************************ 00:04:44.646 START TEST event_reactor 00:04:44.646 ************************************ 00:04:44.646 03:07:27 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:44.646 [2024-10-09 03:07:27.720076] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:04:44.646 [2024-10-09 03:07:27.720219] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57989 ] 00:04:44.646 [2024-10-09 03:07:27.854413] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.904 [2024-10-09 03:07:28.001449] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.839 test_start 00:04:45.839 oneshot 00:04:45.839 tick 100 00:04:45.839 tick 100 00:04:45.839 tick 250 00:04:45.839 tick 100 00:04:45.839 tick 100 00:04:45.839 tick 250 00:04:45.839 tick 100 00:04:45.839 tick 500 00:04:45.839 tick 100 00:04:45.839 tick 100 00:04:45.839 tick 250 00:04:45.839 tick 100 00:04:45.839 tick 100 00:04:45.840 test_end 00:04:45.840 00:04:45.840 real 0m1.406s 00:04:45.840 user 0m1.230s 00:04:45.840 sys 0m0.069s 00:04:45.840 03:07:29 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.840 ************************************ 00:04:45.840 END TEST event_reactor 00:04:45.840 ************************************ 00:04:45.840 03:07:29 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:46.099 03:07:29 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:46.099 03:07:29 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:46.099 03:07:29 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.099 03:07:29 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.099 ************************************ 00:04:46.099 START TEST event_reactor_perf 00:04:46.099 ************************************ 00:04:46.099 03:07:29 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:46.099 [2024-10-09 03:07:29.179837] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
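The event suite in this stretch of the log runs three small standalone binaries with the flags visible in the trace: event_perf posts events across a core mask for a fixed time and prints a per-lcore count (the lcore 0..3 lines above), reactor exercises a single reactor and prints its oneshot/tick schedule, and reactor_perf (starting just below) reports events per second on one reactor. Invoked directly, with the same arguments as in the trace:

  # The same binaries and flags the test scripts run above.
  EVENT=/home/vagrant/spdk_repo/spdk/test/event

  "$EVENT/event_perf/event_perf" -m 0xF -t 1       # 4 cores (mask 0xF), 1 second run
  "$EVENT/reactor/reactor" -t 1                    # poller tick trace on one reactor
  "$EVENT/reactor_perf/reactor_perf" -t 1          # events per second on one reactor
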
00:04:46.099 [2024-10-09 03:07:29.179947] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58025 ] 00:04:46.099 [2024-10-09 03:07:29.318096] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.357 [2024-10-09 03:07:29.457553] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.294 test_start 00:04:47.294 test_end 00:04:47.294 Performance: 421046 events per second 00:04:47.294 00:04:47.294 real 0m1.413s 00:04:47.294 user 0m1.241s 00:04:47.294 sys 0m0.067s 00:04:47.294 03:07:30 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:47.294 ************************************ 00:04:47.294 END TEST event_reactor_perf 00:04:47.294 ************************************ 00:04:47.294 03:07:30 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:47.553 03:07:30 event -- event/event.sh@49 -- # uname -s 00:04:47.553 03:07:30 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:47.553 03:07:30 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:47.553 03:07:30 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:47.553 03:07:30 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:47.553 03:07:30 event -- common/autotest_common.sh@10 -- # set +x 00:04:47.553 ************************************ 00:04:47.553 START TEST event_scheduler 00:04:47.553 ************************************ 00:04:47.553 03:07:30 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:47.553 * Looking for test storage... 
00:04:47.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:47.553 03:07:30 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:47.553 03:07:30 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:04:47.553 03:07:30 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:47.553 03:07:30 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:47.553 03:07:30 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.553 03:07:30 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.553 03:07:30 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.553 03:07:30 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.553 03:07:30 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.553 03:07:30 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.553 03:07:30 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.553 03:07:30 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.553 03:07:30 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.553 03:07:30 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.553 03:07:30 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.553 03:07:30 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:47.553 03:07:30 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:47.553 03:07:30 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.553 03:07:30 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.553 03:07:30 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:47.553 03:07:30 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:47.553 03:07:30 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.553 03:07:30 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:47.553 03:07:30 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.553 03:07:30 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:47.553 03:07:30 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:47.553 03:07:30 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.553 03:07:30 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:47.553 03:07:30 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.553 03:07:30 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.553 03:07:30 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.553 03:07:30 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:47.553 03:07:30 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.553 03:07:30 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:47.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.553 --rc genhtml_branch_coverage=1 00:04:47.553 --rc genhtml_function_coverage=1 00:04:47.553 --rc genhtml_legend=1 00:04:47.553 --rc geninfo_all_blocks=1 00:04:47.553 --rc geninfo_unexecuted_blocks=1 00:04:47.553 00:04:47.553 ' 00:04:47.553 03:07:30 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:47.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.553 --rc genhtml_branch_coverage=1 00:04:47.553 --rc genhtml_function_coverage=1 00:04:47.553 --rc genhtml_legend=1 00:04:47.553 --rc geninfo_all_blocks=1 00:04:47.553 --rc geninfo_unexecuted_blocks=1 00:04:47.553 00:04:47.553 ' 00:04:47.553 03:07:30 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:47.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.553 --rc genhtml_branch_coverage=1 00:04:47.553 --rc genhtml_function_coverage=1 00:04:47.553 --rc genhtml_legend=1 00:04:47.553 --rc geninfo_all_blocks=1 00:04:47.553 --rc geninfo_unexecuted_blocks=1 00:04:47.553 00:04:47.553 ' 00:04:47.553 03:07:30 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:47.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.553 --rc genhtml_branch_coverage=1 00:04:47.553 --rc genhtml_function_coverage=1 00:04:47.553 --rc genhtml_legend=1 00:04:47.553 --rc geninfo_all_blocks=1 00:04:47.553 --rc geninfo_unexecuted_blocks=1 00:04:47.553 00:04:47.553 ' 00:04:47.553 03:07:30 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:47.553 03:07:30 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58093 00:04:47.553 03:07:30 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:47.553 03:07:30 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:47.553 03:07:30 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58093 00:04:47.553 03:07:30 
event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58093 ']' 00:04:47.553 03:07:30 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.553 03:07:30 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:47.553 03:07:30 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.553 03:07:30 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:47.553 03:07:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:47.553 [2024-10-09 03:07:30.844200] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:04:47.553 [2024-10-09 03:07:30.844295] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58093 ] 00:04:47.812 [2024-10-09 03:07:30.982329] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:47.812 [2024-10-09 03:07:31.105496] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.812 [2024-10-09 03:07:31.105646] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.812 [2024-10-09 03:07:31.105769] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:04:47.812 [2024-10-09 03:07:31.105779] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:04:48.747 03:07:31 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:48.747 03:07:31 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:04:48.747 03:07:31 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:48.747 03:07:31 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.747 03:07:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.747 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:48.747 POWER: Cannot set governor of lcore 0 to userspace 00:04:48.747 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:48.747 POWER: Cannot set governor of lcore 0 to performance 00:04:48.747 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:48.747 POWER: Cannot set governor of lcore 0 to userspace 00:04:48.747 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:48.747 POWER: Cannot set governor of lcore 0 to userspace 00:04:48.747 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:48.747 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:48.747 POWER: Unable to set Power Management Environment for lcore 0 00:04:48.748 [2024-10-09 03:07:31.891737] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:04:48.748 [2024-10-09 03:07:31.891751] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:04:48.748 [2024-10-09 03:07:31.891759] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:48.748 [2024-10-09 03:07:31.891774] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:48.748 [2024-10-09 03:07:31.891781] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:48.748 [2024-10-09 03:07:31.891788] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:48.748 03:07:31 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.748 03:07:31 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:48.748 03:07:31 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.748 03:07:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.748 [2024-10-09 03:07:31.971110] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:48.748 [2024-10-09 03:07:32.016149] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:48.748 03:07:32 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.748 03:07:32 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:48.748 03:07:32 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:48.748 03:07:32 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:48.748 03:07:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.748 ************************************ 00:04:48.748 START TEST scheduler_create_thread 00:04:48.748 ************************************ 00:04:48.748 03:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:04:48.748 03:07:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:48.748 03:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.748 03:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.748 2 00:04:48.748 03:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.748 03:07:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:48.748 03:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.748 03:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.006 3 00:04:49.006 03:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.006 03:07:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:49.006 03:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.006 03:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.006 4 00:04:49.007 03:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.007 03:07:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:49.007 03:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.007 03:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.007 5 00:04:49.007 03:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.007 03:07:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:49.007 03:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.007 03:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.007 6 00:04:49.007 03:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.007 03:07:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:49.007 03:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.007 03:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.007 7 00:04:49.007 03:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.007 03:07:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:49.007 03:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.007 03:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.007 8 00:04:49.007 03:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.007 03:07:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:49.007 03:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.007 03:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.007 9 00:04:49.007 03:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.007 03:07:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:49.007 03:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.007 03:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.007 10 00:04:49.007 03:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.007 03:07:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:49.007 03:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.007 03:07:32 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.007 03:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.007 03:07:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:49.007 03:07:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:49.007 03:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.007 03:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.942 03:07:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.942 03:07:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:49.942 03:07:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.942 03:07:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.318 03:07:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.318 03:07:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:51.318 03:07:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:51.318 03:07:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.318 03:07:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.254 03:07:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.254 00:04:52.254 real 0m3.371s 00:04:52.254 user 0m0.025s 00:04:52.254 sys 0m0.004s 00:04:52.254 03:07:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:52.254 03:07:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.254 ************************************ 00:04:52.254 END TEST scheduler_create_thread 00:04:52.254 ************************************ 00:04:52.254 03:07:35 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:52.254 03:07:35 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58093 00:04:52.254 03:07:35 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58093 ']' 00:04:52.254 03:07:35 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58093 00:04:52.254 03:07:35 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:04:52.254 03:07:35 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:52.254 03:07:35 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58093 00:04:52.254 03:07:35 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:04:52.254 03:07:35 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:04:52.254 killing process with pid 58093 00:04:52.254 03:07:35 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
58093' 00:04:52.254 03:07:35 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58093 00:04:52.254 03:07:35 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 58093 00:04:52.513 [2024-10-09 03:07:35.784653] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:53.081 00:04:53.081 real 0m5.496s 00:04:53.081 user 0m11.365s 00:04:53.081 sys 0m0.445s 00:04:53.081 03:07:36 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.081 03:07:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:53.081 ************************************ 00:04:53.081 END TEST event_scheduler 00:04:53.081 ************************************ 00:04:53.081 03:07:36 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:53.081 03:07:36 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:53.081 03:07:36 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:53.081 03:07:36 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.081 03:07:36 event -- common/autotest_common.sh@10 -- # set +x 00:04:53.081 ************************************ 00:04:53.081 START TEST app_repeat 00:04:53.081 ************************************ 00:04:53.081 03:07:36 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:04:53.081 03:07:36 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.081 03:07:36 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.081 03:07:36 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:53.081 03:07:36 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.081 03:07:36 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:53.081 03:07:36 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:53.081 03:07:36 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:53.081 03:07:36 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58205 00:04:53.081 03:07:36 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:53.081 03:07:36 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:53.081 Process app_repeat pid: 58205 00:04:53.081 03:07:36 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58205' 00:04:53.081 03:07:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:53.081 spdk_app_start Round 0 00:04:53.081 03:07:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:53.081 03:07:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58205 /var/tmp/spdk-nbd.sock 00:04:53.081 03:07:36 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58205 ']' 00:04:53.081 03:07:36 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:53.081 03:07:36 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:53.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:53.081 03:07:36 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
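The event_scheduler run above drives SPDK's dynamic scheduler entirely over JSON-RPC: it switches the already-running test app to the dynamic scheduler, completes framework init, then creates pinned threads with fixed activity levels through the test's scheduler_plugin and retargets or deletes them by id. A minimal hand-driven sketch of that sequence follows; it assumes the scheduler test app is listening on /var/tmp/spdk.sock and that the plugin module used by the test is importable by rpc.py, and it simply mirrors the calls visible in the trace rather than documenting a general-purpose API.

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC framework_set_scheduler dynamic                 # scheduler.sh@39 above
$RPC framework_start_init                            # scheduler.sh@40: finish subsystem init
# create a thread pinned to core 0 (cpumask 0x1) that reports itself 100% active
$RPC --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
# the create call returns a thread id; 11 and 12 are the ids seen in this trace
$RPC --plugin scheduler_plugin scheduler_thread_set_active 11 50
$RPC --plugin scheduler_plugin scheduler_thread_delete 12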
00:04:53.081 03:07:36 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:53.081 03:07:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:53.081 [2024-10-09 03:07:36.215726] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:04:53.081 [2024-10-09 03:07:36.215821] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58205 ] 00:04:53.081 [2024-10-09 03:07:36.347144] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:53.340 [2024-10-09 03:07:36.444987] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.340 [2024-10-09 03:07:36.445001] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.340 [2024-10-09 03:07:36.515197] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:53.340 03:07:36 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:53.340 03:07:36 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:53.340 03:07:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:53.599 Malloc0 00:04:53.599 03:07:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:53.858 Malloc1 00:04:53.858 03:07:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.858 03:07:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.858 03:07:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.858 03:07:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:53.858 03:07:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.858 03:07:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:53.858 03:07:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.858 03:07:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.858 03:07:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.858 03:07:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:53.858 03:07:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.858 03:07:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:53.858 03:07:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:53.858 03:07:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:53.858 03:07:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.858 03:07:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:54.117 /dev/nbd0 00:04:54.117 03:07:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:54.376 03:07:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:54.376 03:07:37 event.app_repeat -- common/autotest_common.sh@868 -- # local 
nbd_name=nbd0 00:04:54.376 03:07:37 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:54.376 03:07:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:54.376 03:07:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:54.376 03:07:37 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:54.376 03:07:37 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:54.376 03:07:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:54.376 03:07:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:54.376 03:07:37 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:54.376 1+0 records in 00:04:54.376 1+0 records out 00:04:54.376 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324297 s, 12.6 MB/s 00:04:54.376 03:07:37 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.376 03:07:37 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:54.376 03:07:37 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.376 03:07:37 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:54.376 03:07:37 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:54.376 03:07:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.376 03:07:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.376 03:07:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:54.376 /dev/nbd1 00:04:54.635 03:07:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:54.635 03:07:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:54.635 03:07:37 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:54.635 03:07:37 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:54.635 03:07:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:54.635 03:07:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:54.635 03:07:37 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:54.635 03:07:37 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:54.635 03:07:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:54.635 03:07:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:54.635 03:07:37 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:54.635 1+0 records in 00:04:54.635 1+0 records out 00:04:54.635 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416207 s, 9.8 MB/s 00:04:54.635 03:07:37 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.635 03:07:37 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:54.635 03:07:37 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.635 03:07:37 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:54.635 03:07:37 event.app_repeat -- 
common/autotest_common.sh@889 -- # return 0 00:04:54.635 03:07:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.635 03:07:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.635 03:07:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:54.635 03:07:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.635 03:07:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.894 03:07:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:54.894 { 00:04:54.894 "nbd_device": "/dev/nbd0", 00:04:54.894 "bdev_name": "Malloc0" 00:04:54.894 }, 00:04:54.894 { 00:04:54.894 "nbd_device": "/dev/nbd1", 00:04:54.894 "bdev_name": "Malloc1" 00:04:54.894 } 00:04:54.894 ]' 00:04:54.894 03:07:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:54.894 { 00:04:54.894 "nbd_device": "/dev/nbd0", 00:04:54.894 "bdev_name": "Malloc0" 00:04:54.894 }, 00:04:54.894 { 00:04:54.894 "nbd_device": "/dev/nbd1", 00:04:54.894 "bdev_name": "Malloc1" 00:04:54.894 } 00:04:54.894 ]' 00:04:54.894 03:07:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.894 03:07:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:54.894 /dev/nbd1' 00:04:54.894 03:07:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.894 03:07:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:54.894 /dev/nbd1' 00:04:54.894 03:07:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:54.894 03:07:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:54.894 03:07:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:54.894 03:07:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:54.894 03:07:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:54.894 03:07:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.894 03:07:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.894 03:07:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:54.894 03:07:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:54.894 03:07:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:54.894 03:07:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:54.894 256+0 records in 00:04:54.894 256+0 records out 00:04:54.894 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0093043 s, 113 MB/s 00:04:54.894 03:07:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.894 03:07:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:54.894 256+0 records in 00:04:54.894 256+0 records out 00:04:54.894 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246158 s, 42.6 MB/s 00:04:54.894 03:07:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.894 03:07:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:54.894 256+0 records in 00:04:54.894 
256+0 records out 00:04:54.894 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.039044 s, 26.9 MB/s 00:04:54.894 03:07:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:54.894 03:07:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.894 03:07:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.894 03:07:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:54.894 03:07:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:54.894 03:07:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:54.894 03:07:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:54.894 03:07:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.894 03:07:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:55.153 03:07:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:55.153 03:07:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:55.153 03:07:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:55.153 03:07:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:55.153 03:07:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.153 03:07:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.153 03:07:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:55.153 03:07:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:55.153 03:07:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:55.153 03:07:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:55.153 03:07:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:55.153 03:07:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:55.153 03:07:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:55.153 03:07:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:55.153 03:07:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:55.153 03:07:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:55.412 03:07:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:55.412 03:07:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:55.412 03:07:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:55.412 03:07:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:55.671 03:07:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:55.671 03:07:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:55.671 03:07:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:55.671 03:07:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:55.671 03:07:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:04:55.671 03:07:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:55.671 03:07:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:55.671 03:07:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:55.671 03:07:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:55.671 03:07:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.671 03:07:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:55.940 03:07:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:55.940 03:07:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:55.940 03:07:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:55.940 03:07:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:55.940 03:07:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:55.940 03:07:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:55.940 03:07:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:55.940 03:07:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:55.940 03:07:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:55.940 03:07:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:55.940 03:07:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:55.940 03:07:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:55.940 03:07:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:56.211 03:07:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:56.469 [2024-10-09 03:07:39.649453] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:56.469 [2024-10-09 03:07:39.733248] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.469 [2024-10-09 03:07:39.733259] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.728 [2024-10-09 03:07:39.784813] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:56.728 [2024-10-09 03:07:39.784910] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:56.728 [2024-10-09 03:07:39.784923] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:59.261 03:07:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:59.261 spdk_app_start Round 1 00:04:59.261 03:07:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:59.261 03:07:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58205 /var/tmp/spdk-nbd.sock 00:04:59.261 03:07:42 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58205 ']' 00:04:59.261 03:07:42 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:59.261 03:07:42 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:59.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:59.261 03:07:42 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
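Each app_repeat round above repeats the same data-integrity pattern: export the malloc bdevs as kernel NBD devices, write 1 MiB of random data through each, compare it back, then tear the exports down. A stand-alone sketch of one such pass, assuming an SPDK app with a Malloc0 bdev is listening on /var/tmp/spdk-nbd.sock; the temp-file path here is a placeholder, not the path the test uses.

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
TMP=/tmp/nbdrandtest                                  # placeholder path for this sketch
$RPC nbd_start_disk Malloc0 /dev/nbd0                 # expose the bdev as a block device
dd if=/dev/urandom of="$TMP" bs=4096 count=256        # 1 MiB of reference data
dd if="$TMP" of=/dev/nbd0 bs=4096 count=256 oflag=direct
cmp -b -n 1M "$TMP" /dev/nbd0                         # non-zero exit if the data read back differs
$RPC nbd_stop_disk /dev/nbd0
rm -f "$TMP"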
00:04:59.261 03:07:42 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:59.261 03:07:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:59.522 03:07:42 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:59.522 03:07:42 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:59.522 03:07:42 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:59.781 Malloc0 00:04:59.781 03:07:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.041 Malloc1 00:05:00.041 03:07:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.041 03:07:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.041 03:07:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.041 03:07:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:00.041 03:07:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.041 03:07:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:00.041 03:07:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.041 03:07:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.041 03:07:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.041 03:07:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:00.041 03:07:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.041 03:07:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:00.041 03:07:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:00.041 03:07:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:00.041 03:07:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.041 03:07:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:00.299 /dev/nbd0 00:05:00.299 03:07:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:00.299 03:07:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:00.299 03:07:43 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:00.299 03:07:43 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:00.299 03:07:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:00.299 03:07:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:00.299 03:07:43 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:00.299 03:07:43 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:00.299 03:07:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:00.299 03:07:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:00.299 03:07:43 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:00.299 1+0 records in 00:05:00.299 1+0 records out 
00:05:00.299 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191142 s, 21.4 MB/s 00:05:00.299 03:07:43 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:00.299 03:07:43 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:00.299 03:07:43 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:00.299 03:07:43 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:00.299 03:07:43 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:00.299 03:07:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:00.299 03:07:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.299 03:07:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:00.556 /dev/nbd1 00:05:00.556 03:07:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:00.556 03:07:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:00.556 03:07:43 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:00.556 03:07:43 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:00.556 03:07:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:00.556 03:07:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:00.556 03:07:43 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:00.556 03:07:43 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:00.556 03:07:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:00.556 03:07:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:00.556 03:07:43 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:00.556 1+0 records in 00:05:00.556 1+0 records out 00:05:00.556 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261883 s, 15.6 MB/s 00:05:00.556 03:07:43 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:00.556 03:07:43 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:00.556 03:07:43 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:00.814 03:07:43 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:00.814 03:07:43 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:00.814 03:07:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:00.814 03:07:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.815 03:07:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:00.815 03:07:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.815 03:07:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:01.073 { 00:05:01.073 "nbd_device": "/dev/nbd0", 00:05:01.073 "bdev_name": "Malloc0" 00:05:01.073 }, 00:05:01.073 { 00:05:01.073 "nbd_device": "/dev/nbd1", 00:05:01.073 "bdev_name": "Malloc1" 00:05:01.073 } 
00:05:01.073 ]' 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:01.073 { 00:05:01.073 "nbd_device": "/dev/nbd0", 00:05:01.073 "bdev_name": "Malloc0" 00:05:01.073 }, 00:05:01.073 { 00:05:01.073 "nbd_device": "/dev/nbd1", 00:05:01.073 "bdev_name": "Malloc1" 00:05:01.073 } 00:05:01.073 ]' 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:01.073 /dev/nbd1' 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:01.073 /dev/nbd1' 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:01.073 256+0 records in 00:05:01.073 256+0 records out 00:05:01.073 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105829 s, 99.1 MB/s 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:01.073 256+0 records in 00:05:01.073 256+0 records out 00:05:01.073 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0224271 s, 46.8 MB/s 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:01.073 256+0 records in 00:05:01.073 256+0 records out 00:05:01.073 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.039261 s, 26.7 MB/s 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.073 03:07:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:01.332 03:07:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:01.332 03:07:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:01.332 03:07:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:01.332 03:07:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.332 03:07:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.332 03:07:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:01.332 03:07:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:01.332 03:07:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.332 03:07:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.332 03:07:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:01.901 03:07:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:01.901 03:07:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:01.901 03:07:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:01.901 03:07:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.901 03:07:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.901 03:07:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:01.901 03:07:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:01.901 03:07:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.901 03:07:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.901 03:07:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.901 03:07:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:02.160 03:07:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:02.160 03:07:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:02.160 03:07:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:02.160 03:07:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:02.160 03:07:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:02.160 03:07:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:02.160 03:07:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:02.160 03:07:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:02.160 03:07:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:02.160 03:07:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:02.160 03:07:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:02.160 03:07:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:02.160 03:07:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:02.419 03:07:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:02.678 [2024-10-09 03:07:45.794834] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.678 [2024-10-09 03:07:45.869070] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.678 [2024-10-09 03:07:45.869073] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.678 [2024-10-09 03:07:45.940377] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:02.678 [2024-10-09 03:07:45.940500] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:02.678 [2024-10-09 03:07:45.940514] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:05.966 spdk_app_start Round 2 00:05:05.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:05.966 03:07:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:05.966 03:07:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:05.966 03:07:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58205 /var/tmp/spdk-nbd.sock 00:05:05.966 03:07:48 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58205 ']' 00:05:05.966 03:07:48 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:05.966 03:07:48 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:05.966 03:07:48 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
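Between rounds the test also checks that no NBD exports are left behind: nbd_get_disks returns a JSON array, jq extracts the nbd_device fields, and grep -c counts them (2 while both disks are up, 0 after nbd_stop_disk). The same check in sketch form, reusing the RPC socket assumed in the sketches above:

disks=$($RPC nbd_get_disks | jq -r '.[] | .nbd_device')    # "/dev/nbd0", "/dev/nbd1", or nothing
count=$(printf '%s\n' "$disks" | grep -c /dev/nbd || true) # grep -c exits non-zero on zero matches
[ "$count" -eq 0 ] && echo 'all NBD devices released'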
00:05:05.966 03:07:48 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:05.966 03:07:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:05.966 03:07:48 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:05.966 03:07:48 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:05.966 03:07:48 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.966 Malloc0 00:05:05.966 03:07:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:06.226 Malloc1 00:05:06.226 03:07:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:06.226 03:07:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.226 03:07:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.226 03:07:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:06.226 03:07:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.226 03:07:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:06.226 03:07:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:06.226 03:07:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.226 03:07:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.226 03:07:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:06.226 03:07:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.226 03:07:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:06.226 03:07:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:06.226 03:07:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:06.226 03:07:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.226 03:07:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:06.795 /dev/nbd0 00:05:06.795 03:07:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:06.795 03:07:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:06.795 03:07:49 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:06.795 03:07:49 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:06.795 03:07:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:06.795 03:07:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:06.795 03:07:49 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:06.795 03:07:49 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:06.795 03:07:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:06.795 03:07:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:06.795 03:07:49 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.795 1+0 records in 00:05:06.795 1+0 records out 
00:05:06.795 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268616 s, 15.2 MB/s 00:05:06.795 03:07:49 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.795 03:07:49 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:06.795 03:07:49 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.795 03:07:49 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:06.795 03:07:49 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:06.795 03:07:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.795 03:07:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.795 03:07:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:06.795 /dev/nbd1 00:05:06.795 03:07:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:06.795 03:07:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:06.795 03:07:50 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:06.795 03:07:50 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:06.795 03:07:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:06.796 03:07:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:06.796 03:07:50 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:06.796 03:07:50 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:06.796 03:07:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:06.796 03:07:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:06.796 03:07:50 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.796 1+0 records in 00:05:06.796 1+0 records out 00:05:06.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273884 s, 15.0 MB/s 00:05:07.055 03:07:50 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:07.055 03:07:50 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:07.055 03:07:50 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:07.055 03:07:50 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:07.055 03:07:50 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:07.055 03:07:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:07.055 03:07:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.055 03:07:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:07.055 03:07:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.055 03:07:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:07.322 { 00:05:07.322 "nbd_device": "/dev/nbd0", 00:05:07.322 "bdev_name": "Malloc0" 00:05:07.322 }, 00:05:07.322 { 00:05:07.322 "nbd_device": "/dev/nbd1", 00:05:07.322 "bdev_name": "Malloc1" 00:05:07.322 } 
00:05:07.322 ]' 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:07.322 { 00:05:07.322 "nbd_device": "/dev/nbd0", 00:05:07.322 "bdev_name": "Malloc0" 00:05:07.322 }, 00:05:07.322 { 00:05:07.322 "nbd_device": "/dev/nbd1", 00:05:07.322 "bdev_name": "Malloc1" 00:05:07.322 } 00:05:07.322 ]' 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:07.322 /dev/nbd1' 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:07.322 /dev/nbd1' 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:07.322 256+0 records in 00:05:07.322 256+0 records out 00:05:07.322 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00537512 s, 195 MB/s 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:07.322 256+0 records in 00:05:07.322 256+0 records out 00:05:07.322 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218724 s, 47.9 MB/s 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:07.322 256+0 records in 00:05:07.322 256+0 records out 00:05:07.322 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247989 s, 42.3 MB/s 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:07.322 03:07:50 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:07.322 03:07:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:07.595 03:07:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:07.595 03:07:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:07.595 03:07:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:07.595 03:07:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.595 03:07:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.595 03:07:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:07.595 03:07:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:07.595 03:07:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.595 03:07:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:07.595 03:07:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:07.854 03:07:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:07.854 03:07:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:07.854 03:07:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:07.854 03:07:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.854 03:07:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.854 03:07:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:07.854 03:07:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:07.854 03:07:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.854 03:07:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:07.854 03:07:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.854 03:07:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:08.114 03:07:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:08.114 03:07:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:08.114 03:07:51 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:08.373 03:07:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:08.373 03:07:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:08.373 03:07:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:08.373 03:07:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:08.373 03:07:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:08.373 03:07:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:08.373 03:07:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:08.373 03:07:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:08.373 03:07:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:08.373 03:07:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:08.633 03:07:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:08.893 [2024-10-09 03:07:52.044485] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:08.893 [2024-10-09 03:07:52.123988] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.893 [2024-10-09 03:07:52.123992] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.152 [2024-10-09 03:07:52.200209] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:09.152 [2024-10-09 03:07:52.200342] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:09.152 [2024-10-09 03:07:52.200356] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:11.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:11.688 03:07:54 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58205 /var/tmp/spdk-nbd.sock 00:05:11.688 03:07:54 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58205 ']' 00:05:11.688 03:07:54 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:11.688 03:07:54 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:11.688 03:07:54 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
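The data-verify pass traced above follows a simple pattern: nbd_dd_data_verify fills a 1 MiB scratch file from /dev/urandom, copies it onto every exported NBD device with O_DIRECT writes, then byte-compares each device against the scratch file before the disks are stopped over the RPC socket. A minimal standalone sketch of that write/verify loop (device list and scratch path are illustrative, not the exact values used by nbd_common.sh):

    nbd_list=(/dev/nbd0 /dev/nbd1)        # devices exported earlier via nbd_start_disks
    tmp_file=/tmp/nbdrandtest             # scratch pattern file (hypothetical path)

    # write phase: put the same 1 MiB random pattern on every device, bypassing the page cache
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify phase: each device must match the pattern byte for byte
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev" || echo "data mismatch on $dev" >&2
    done
    rm "$tmp_file"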
00:05:11.688 03:07:54 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:11.688 03:07:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:11.948 03:07:55 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:11.948 03:07:55 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:11.948 03:07:55 event.app_repeat -- event/event.sh@39 -- # killprocess 58205 00:05:11.948 03:07:55 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58205 ']' 00:05:11.948 03:07:55 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58205 00:05:11.948 03:07:55 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:11.948 03:07:55 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:11.948 03:07:55 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58205 00:05:11.948 killing process with pid 58205 00:05:11.948 03:07:55 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:11.948 03:07:55 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:11.948 03:07:55 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58205' 00:05:11.948 03:07:55 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58205 00:05:11.948 03:07:55 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58205 00:05:12.207 spdk_app_start is called in Round 0. 00:05:12.207 Shutdown signal received, stop current app iteration 00:05:12.207 Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 reinitialization... 00:05:12.207 spdk_app_start is called in Round 1. 00:05:12.207 Shutdown signal received, stop current app iteration 00:05:12.207 Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 reinitialization... 00:05:12.207 spdk_app_start is called in Round 2. 00:05:12.207 Shutdown signal received, stop current app iteration 00:05:12.207 Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 reinitialization... 00:05:12.207 spdk_app_start is called in Round 3. 00:05:12.207 Shutdown signal received, stop current app iteration 00:05:12.207 03:07:55 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:12.207 03:07:55 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:12.207 00:05:12.207 real 0m19.117s 00:05:12.207 user 0m43.142s 00:05:12.207 sys 0m3.026s 00:05:12.207 03:07:55 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.207 ************************************ 00:05:12.207 END TEST app_repeat 00:05:12.207 ************************************ 00:05:12.207 03:07:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:12.207 03:07:55 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:12.207 03:07:55 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:12.207 03:07:55 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.207 03:07:55 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.207 03:07:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:12.207 ************************************ 00:05:12.207 START TEST cpu_locks 00:05:12.207 ************************************ 00:05:12.207 03:07:55 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:12.207 * Looking for test storage... 
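Each test above tears its target down through the killprocess helper, whose behaviour the trace makes explicit: confirm the pid is still alive with kill -0, read the process name back with ps (reactor_0 for a plain SPDK target; sudo is special-cased), then send the signal and wait for the pid so the next test starts from a clean state. A simplified sketch, assuming the target was launched from the same shell:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1               # pid must still exist
        ps --no-headers -o comm= "$pid"          # reactor_0 here; the real helper branches on sudo
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                              # reap the child before the next test runs
    }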
00:05:12.207 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:12.207 03:07:55 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:12.207 03:07:55 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:12.207 03:07:55 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:05:12.467 03:07:55 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:12.467 03:07:55 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.467 03:07:55 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.467 03:07:55 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.467 03:07:55 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.467 03:07:55 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.467 03:07:55 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.467 03:07:55 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.467 03:07:55 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.467 03:07:55 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.467 03:07:55 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.467 03:07:55 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.467 03:07:55 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:12.467 03:07:55 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:12.467 03:07:55 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.467 03:07:55 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:12.467 03:07:55 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:12.467 03:07:55 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:12.467 03:07:55 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.467 03:07:55 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:12.467 03:07:55 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.467 03:07:55 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:12.467 03:07:55 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:12.467 03:07:55 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.467 03:07:55 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:12.467 03:07:55 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.467 03:07:55 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.467 03:07:55 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.467 03:07:55 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:12.467 03:07:55 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.467 03:07:55 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:12.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.467 --rc genhtml_branch_coverage=1 00:05:12.467 --rc genhtml_function_coverage=1 00:05:12.467 --rc genhtml_legend=1 00:05:12.467 --rc geninfo_all_blocks=1 00:05:12.467 --rc geninfo_unexecuted_blocks=1 00:05:12.467 00:05:12.467 ' 00:05:12.467 03:07:55 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:12.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.467 --rc genhtml_branch_coverage=1 00:05:12.467 --rc genhtml_function_coverage=1 
00:05:12.467 --rc genhtml_legend=1 00:05:12.467 --rc geninfo_all_blocks=1 00:05:12.467 --rc geninfo_unexecuted_blocks=1 00:05:12.467 00:05:12.467 ' 00:05:12.468 03:07:55 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:12.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.468 --rc genhtml_branch_coverage=1 00:05:12.468 --rc genhtml_function_coverage=1 00:05:12.468 --rc genhtml_legend=1 00:05:12.468 --rc geninfo_all_blocks=1 00:05:12.468 --rc geninfo_unexecuted_blocks=1 00:05:12.468 00:05:12.468 ' 00:05:12.468 03:07:55 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:12.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.468 --rc genhtml_branch_coverage=1 00:05:12.468 --rc genhtml_function_coverage=1 00:05:12.468 --rc genhtml_legend=1 00:05:12.468 --rc geninfo_all_blocks=1 00:05:12.468 --rc geninfo_unexecuted_blocks=1 00:05:12.468 00:05:12.468 ' 00:05:12.468 03:07:55 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:12.468 03:07:55 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:12.468 03:07:55 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:12.468 03:07:55 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:12.468 03:07:55 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.468 03:07:55 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.468 03:07:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.468 ************************************ 00:05:12.468 START TEST default_locks 00:05:12.468 ************************************ 00:05:12.468 03:07:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:12.468 03:07:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58638 00:05:12.468 03:07:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58638 00:05:12.468 03:07:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:12.468 03:07:55 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58638 ']' 00:05:12.468 03:07:55 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.468 03:07:55 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:12.468 03:07:55 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.468 03:07:55 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:12.468 03:07:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.468 [2024-10-09 03:07:55.610519] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:05:12.468 [2024-10-09 03:07:55.610619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58638 ] 00:05:12.468 [2024-10-09 03:07:55.737748] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.727 [2024-10-09 03:07:55.840887] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.727 [2024-10-09 03:07:55.912582] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:13.665 03:07:56 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:13.665 03:07:56 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:13.665 03:07:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58638 00:05:13.665 03:07:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58638 00:05:13.665 03:07:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:13.665 03:07:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58638 00:05:13.665 03:07:56 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 58638 ']' 00:05:13.665 03:07:56 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 58638 00:05:13.665 03:07:56 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:13.665 03:07:56 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:13.665 03:07:56 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58638 00:05:13.925 killing process with pid 58638 00:05:13.925 03:07:56 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:13.925 03:07:56 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:13.925 03:07:56 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58638' 00:05:13.925 03:07:56 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 58638 00:05:13.925 03:07:56 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 58638 00:05:14.494 03:07:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58638 00:05:14.494 03:07:57 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:14.494 03:07:57 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58638 00:05:14.494 03:07:57 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:14.494 03:07:57 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:14.494 03:07:57 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:14.494 03:07:57 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:14.494 03:07:57 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58638 00:05:14.494 03:07:57 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58638 ']' 00:05:14.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:14.494 03:07:57 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.494 03:07:57 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:14.494 03:07:57 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.494 ERROR: process (pid: 58638) is no longer running 00:05:14.494 03:07:57 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:14.494 03:07:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.494 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58638) - No such process 00:05:14.494 03:07:57 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:14.494 03:07:57 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:14.494 03:07:57 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:14.494 03:07:57 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:14.494 03:07:57 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:14.494 03:07:57 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:14.494 03:07:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:14.494 03:07:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:14.494 03:07:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:14.494 03:07:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:14.494 00:05:14.494 real 0m2.005s 00:05:14.494 user 0m2.146s 00:05:14.494 sys 0m0.536s 00:05:14.494 03:07:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.494 03:07:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.494 ************************************ 00:05:14.494 END TEST default_locks 00:05:14.494 ************************************ 00:05:14.494 03:07:57 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:14.494 03:07:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.494 03:07:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.494 03:07:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.494 ************************************ 00:05:14.494 START TEST default_locks_via_rpc 00:05:14.494 ************************************ 00:05:14.494 03:07:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:14.494 03:07:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58691 00:05:14.494 03:07:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:14.494 03:07:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58691 00:05:14.494 03:07:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58691 ']' 00:05:14.494 03:07:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.494 03:07:57 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:05:14.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.494 03:07:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.494 03:07:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:14.494 03:07:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.494 [2024-10-09 03:07:57.689044] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:05:14.494 [2024-10-09 03:07:57.689169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58691 ] 00:05:14.754 [2024-10-09 03:07:57.828002] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.754 [2024-10-09 03:07:57.920915] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.754 [2024-10-09 03:07:58.014060] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:15.692 03:07:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:15.692 03:07:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:15.692 03:07:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:15.692 03:07:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.692 03:07:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.692 03:07:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.692 03:07:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:15.692 03:07:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:15.692 03:07:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:15.692 03:07:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:15.692 03:07:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:15.692 03:07:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.692 03:07:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.692 03:07:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.692 03:07:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58691 00:05:15.692 03:07:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58691 00:05:15.692 03:07:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:15.952 03:07:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58691 00:05:15.952 03:07:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 58691 ']' 00:05:15.952 03:07:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 58691 
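The default_locks_via_rpc sequence above hinges on the per-core lock files spdk_tgt holds (named /var/tmp/spdk_cpu_lock_*, as the later locking_overlapped_coremask output shows): framework_disable_cpumask_locks drops them at runtime, framework_enable_cpumask_locks re-claims them, and lslocks on the target pid is enough to observe the change. Condensed from the trace (pid from this run; rpc.py path shortened):

    pid=58691                                    # spdk_tgt -m 0x1, RPC socket /var/tmp/spdk.sock
    rpc="scripts/rpc.py -s /var/tmp/spdk.sock"

    $rpc framework_disable_cpumask_locks         # release the core locks held since startup
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null      # expected: no matches while locks are disabled
    $rpc framework_enable_cpumask_locks          # claim the cores again
    lslocks -p "$pid" | grep spdk_cpu_lock       # the core-0 lock is visible once more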
00:05:15.952 03:07:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:15.952 03:07:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:15.952 03:07:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58691 00:05:15.952 killing process with pid 58691 00:05:15.952 03:07:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:15.952 03:07:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:15.952 03:07:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58691' 00:05:15.952 03:07:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 58691 00:05:15.952 03:07:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 58691 00:05:16.520 00:05:16.520 real 0m1.944s 00:05:16.520 user 0m1.992s 00:05:16.520 sys 0m0.687s 00:05:16.520 03:07:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.520 ************************************ 00:05:16.520 END TEST default_locks_via_rpc 00:05:16.520 ************************************ 00:05:16.520 03:07:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.520 03:07:59 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:16.520 03:07:59 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.520 03:07:59 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.520 03:07:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.520 ************************************ 00:05:16.520 START TEST non_locking_app_on_locked_coremask 00:05:16.520 ************************************ 00:05:16.520 03:07:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:16.520 03:07:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58741 00:05:16.520 03:07:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:16.520 03:07:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58741 /var/tmp/spdk.sock 00:05:16.520 03:07:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58741 ']' 00:05:16.520 03:07:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.520 03:07:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:16.520 03:07:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
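The failure-path assertions in these lock tests (NOT waitforlisten on a killed pid in default_locks, and later NOT waitforlisten on a second target that loses the core-lock race) all go through the NOT wrapper: run the command, capture its exit status, and succeed only when that status is non-zero. Roughly (simplified; the traced helper also special-cases statuses above 128):

    NOT() {
        local es=0
        "$@" || es=$?        # e.g. NOT waitforlisten 58638 after that target was killed
        (( es != 0 ))        # the assertion passes only if the wrapped command failed
    }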
00:05:16.520 03:07:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:16.520 03:07:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:16.520 [2024-10-09 03:07:59.691511] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:05:16.520 [2024-10-09 03:07:59.691616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58741 ] 00:05:16.779 [2024-10-09 03:07:59.828570] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.779 [2024-10-09 03:07:59.968556] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.779 [2024-10-09 03:08:00.038003] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:17.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:17.716 03:08:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:17.716 03:08:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:17.716 03:08:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58757 00:05:17.716 03:08:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:17.716 03:08:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58757 /var/tmp/spdk2.sock 00:05:17.716 03:08:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58757 ']' 00:05:17.716 03:08:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:17.716 03:08:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:17.716 03:08:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:17.716 03:08:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:17.716 03:08:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.716 [2024-10-09 03:08:00.719585] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:05:17.716 [2024-10-09 03:08:00.719856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58757 ] 00:05:17.716 [2024-10-09 03:08:00.865853] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:17.716 [2024-10-09 03:08:00.865908] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.975 [2024-10-09 03:08:01.102885] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.975 [2024-10-09 03:08:01.252473] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:18.542 03:08:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:18.542 03:08:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:18.542 03:08:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58741 00:05:18.542 03:08:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58741 00:05:18.542 03:08:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:19.479 03:08:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58741 00:05:19.479 03:08:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58741 ']' 00:05:19.479 03:08:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58741 00:05:19.479 03:08:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:19.479 03:08:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:19.479 03:08:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58741 00:05:19.479 killing process with pid 58741 00:05:19.479 03:08:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:19.479 03:08:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:19.479 03:08:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58741' 00:05:19.479 03:08:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58741 00:05:19.479 03:08:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58741 00:05:20.415 03:08:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58757 00:05:20.415 03:08:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58757 ']' 00:05:20.415 03:08:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58757 00:05:20.415 03:08:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:20.415 03:08:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:20.415 03:08:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58757 00:05:20.415 killing process with pid 58757 00:05:20.415 03:08:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:20.415 03:08:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:20.415 03:08:03 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58757' 00:05:20.415 03:08:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58757 00:05:20.415 03:08:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58757 00:05:20.674 ************************************ 00:05:20.674 END TEST non_locking_app_on_locked_coremask 00:05:20.674 ************************************ 00:05:20.674 00:05:20.674 real 0m4.251s 00:05:20.674 user 0m4.662s 00:05:20.674 sys 0m1.252s 00:05:20.674 03:08:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.674 03:08:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.674 03:08:03 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:20.674 03:08:03 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.674 03:08:03 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.674 03:08:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:20.674 ************************************ 00:05:20.674 START TEST locking_app_on_unlocked_coremask 00:05:20.674 ************************************ 00:05:20.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.674 03:08:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:20.674 03:08:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58830 00:05:20.674 03:08:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:20.674 03:08:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58830 /var/tmp/spdk.sock 00:05:20.674 03:08:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58830 ']' 00:05:20.674 03:08:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.674 03:08:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:20.674 03:08:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.674 03:08:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:20.674 03:08:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.949 [2024-10-09 03:08:03.989108] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:05:20.949 [2024-10-09 03:08:03.989206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58830 ] 00:05:20.949 [2024-10-09 03:08:04.121790] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:20.949 [2024-10-09 03:08:04.121827] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.949 [2024-10-09 03:08:04.216359] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.222 [2024-10-09 03:08:04.283672] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:21.222 03:08:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:21.222 03:08:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:21.222 03:08:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58838 00:05:21.222 03:08:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:21.222 03:08:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58838 /var/tmp/spdk2.sock 00:05:21.222 03:08:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58838 ']' 00:05:21.222 03:08:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:21.222 03:08:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:21.222 03:08:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:21.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:21.223 03:08:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:21.223 03:08:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.481 [2024-10-09 03:08:04.550041] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:05:21.481 [2024-10-09 03:08:04.550335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58838 ] 00:05:21.481 [2024-10-09 03:08:04.691569] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.740 [2024-10-09 03:08:04.879227] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.740 [2024-10-09 03:08:05.010310] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:22.308 03:08:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:22.308 03:08:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:22.308 03:08:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58838 00:05:22.308 03:08:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58838 00:05:22.308 03:08:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:23.244 03:08:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58830 00:05:23.244 03:08:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58830 ']' 00:05:23.244 03:08:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 58830 00:05:23.244 03:08:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:23.244 03:08:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:23.244 03:08:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58830 00:05:23.244 killing process with pid 58830 00:05:23.244 03:08:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:23.244 03:08:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:23.244 03:08:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58830' 00:05:23.244 03:08:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 58830 00:05:23.244 03:08:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 58830 00:05:23.811 03:08:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58838 00:05:23.811 03:08:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58838 ']' 00:05:23.811 03:08:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 58838 00:05:23.811 03:08:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:23.811 03:08:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:23.811 03:08:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58838 00:05:24.070 killing process with pid 58838 00:05:24.070 03:08:07 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:24.070 03:08:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:24.070 03:08:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58838' 00:05:24.070 03:08:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 58838 00:05:24.070 03:08:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 58838 00:05:24.329 ************************************ 00:05:24.329 END TEST locking_app_on_unlocked_coremask 00:05:24.329 ************************************ 00:05:24.329 00:05:24.329 real 0m3.584s 00:05:24.329 user 0m3.892s 00:05:24.329 sys 0m1.076s 00:05:24.329 03:08:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:24.329 03:08:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.329 03:08:07 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:24.329 03:08:07 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:24.329 03:08:07 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:24.329 03:08:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.329 ************************************ 00:05:24.329 START TEST locking_app_on_locked_coremask 00:05:24.329 ************************************ 00:05:24.329 03:08:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:24.329 03:08:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58905 00:05:24.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.329 03:08:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58905 /var/tmp/spdk.sock 00:05:24.329 03:08:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:24.329 03:08:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58905 ']' 00:05:24.329 03:08:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.329 03:08:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:24.329 03:08:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.329 03:08:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:24.329 03:08:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.588 [2024-10-09 03:08:07.633646] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:05:24.588 [2024-10-09 03:08:07.633738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58905 ] 00:05:24.588 [2024-10-09 03:08:07.770511] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.588 [2024-10-09 03:08:07.871340] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.847 [2024-10-09 03:08:07.937085] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:25.414 03:08:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:25.414 03:08:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:25.414 03:08:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58921 00:05:25.414 03:08:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:25.415 03:08:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58921 /var/tmp/spdk2.sock 00:05:25.415 03:08:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:25.415 03:08:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58921 /var/tmp/spdk2.sock 00:05:25.415 03:08:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:25.415 03:08:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:25.415 03:08:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:25.415 03:08:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:25.415 03:08:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 58921 /var/tmp/spdk2.sock 00:05:25.415 03:08:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58921 ']' 00:05:25.415 03:08:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.415 03:08:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:25.415 03:08:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:25.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:25.415 03:08:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:25.415 03:08:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.415 [2024-10-09 03:08:08.683138] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:05:25.415 [2024-10-09 03:08:08.683245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58921 ] 00:05:25.674 [2024-10-09 03:08:08.822058] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58905 has claimed it. 00:05:25.674 [2024-10-09 03:08:08.822161] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:26.242 ERROR: process (pid: 58921) is no longer running 00:05:26.242 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58921) - No such process 00:05:26.242 03:08:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:26.242 03:08:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:26.242 03:08:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:26.242 03:08:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:26.242 03:08:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:26.242 03:08:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:26.242 03:08:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58905 00:05:26.242 03:08:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58905 00:05:26.242 03:08:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:26.810 03:08:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58905 00:05:26.810 03:08:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58905 ']' 00:05:26.810 03:08:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58905 00:05:26.810 03:08:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:26.810 03:08:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:26.810 03:08:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58905 00:05:26.810 killing process with pid 58905 00:05:26.810 03:08:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:26.810 03:08:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:26.810 03:08:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58905' 00:05:26.810 03:08:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58905 00:05:26.810 03:08:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58905 00:05:27.069 ************************************ 00:05:27.069 END TEST locking_app_on_locked_coremask 00:05:27.069 ************************************ 00:05:27.069 00:05:27.069 real 0m2.721s 00:05:27.069 user 0m3.185s 00:05:27.069 sys 0m0.654s 00:05:27.069 03:08:10 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.069 03:08:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.069 03:08:10 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:27.069 03:08:10 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.069 03:08:10 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.069 03:08:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.069 ************************************ 00:05:27.069 START TEST locking_overlapped_coremask 00:05:27.069 ************************************ 00:05:27.069 03:08:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:27.069 03:08:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58971 00:05:27.069 03:08:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58971 /var/tmp/spdk.sock 00:05:27.069 03:08:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 58971 ']' 00:05:27.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.069 03:08:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.069 03:08:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:27.069 03:08:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:27.069 03:08:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.069 03:08:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:27.069 03:08:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.328 [2024-10-09 03:08:10.408766] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:05:27.328 [2024-10-09 03:08:10.408867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58971 ] 00:05:27.328 [2024-10-09 03:08:10.544787] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:27.587 [2024-10-09 03:08:10.644692] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.587 [2024-10-09 03:08:10.644815] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:27.587 [2024-10-09 03:08:10.644820] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.587 [2024-10-09 03:08:10.710730] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:28.155 03:08:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:28.155 03:08:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:28.155 03:08:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:28.155 03:08:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58990 00:05:28.155 03:08:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58990 /var/tmp/spdk2.sock 00:05:28.155 03:08:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:28.155 03:08:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58990 /var/tmp/spdk2.sock 00:05:28.155 03:08:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:28.155 03:08:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:28.155 03:08:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:28.155 03:08:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:28.155 03:08:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 58990 /var/tmp/spdk2.sock 00:05:28.155 03:08:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 58990 ']' 00:05:28.155 03:08:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:28.155 03:08:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:28.155 03:08:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:28.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:28.155 03:08:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:28.155 03:08:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.414 [2024-10-09 03:08:11.468875] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:05:28.414 [2024-10-09 03:08:11.469143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58990 ] 00:05:28.414 [2024-10-09 03:08:11.605396] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58971 has claimed it. 00:05:28.414 [2024-10-09 03:08:11.605473] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:28.985 ERROR: process (pid: 58990) is no longer running 00:05:28.985 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58990) - No such process 00:05:28.985 03:08:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:28.985 03:08:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:28.985 03:08:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:28.985 03:08:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:28.985 03:08:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:28.985 03:08:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:28.985 03:08:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:28.985 03:08:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:28.985 03:08:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:28.985 03:08:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:28.985 03:08:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58971 00:05:28.985 03:08:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 58971 ']' 00:05:28.985 03:08:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 58971 00:05:28.985 03:08:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:28.985 03:08:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:28.985 03:08:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58971 00:05:28.985 03:08:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:28.985 03:08:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:28.985 03:08:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58971' 00:05:28.985 killing process with pid 58971 00:05:28.985 03:08:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 58971 00:05:28.985 03:08:12 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 58971 00:05:29.553 00:05:29.553 real 0m2.492s 00:05:29.553 user 0m7.040s 00:05:29.553 sys 0m0.442s 00:05:29.553 03:08:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.553 03:08:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.553 ************************************ 00:05:29.553 END TEST locking_overlapped_coremask 00:05:29.553 ************************************ 00:05:29.813 03:08:12 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:29.813 03:08:12 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.813 03:08:12 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.813 03:08:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.813 ************************************ 00:05:29.813 START TEST locking_overlapped_coremask_via_rpc 00:05:29.813 ************************************ 00:05:29.813 03:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:29.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.813 03:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59030 00:05:29.813 03:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59030 /var/tmp/spdk.sock 00:05:29.813 03:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59030 ']' 00:05:29.813 03:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.813 03:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:29.813 03:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:29.813 03:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.813 03:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:29.813 03:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.813 [2024-10-09 03:08:12.955419] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:05:29.813 [2024-10-09 03:08:12.955533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59030 ] 00:05:29.813 [2024-10-09 03:08:13.097472] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:29.813 [2024-10-09 03:08:13.097526] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:30.072 [2024-10-09 03:08:13.227876] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.072 [2024-10-09 03:08:13.227976] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.072 [2024-10-09 03:08:13.227988] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.072 [2024-10-09 03:08:13.321185] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:30.641 03:08:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.641 03:08:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:30.641 03:08:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59048 00:05:30.641 03:08:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:30.641 03:08:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59048 /var/tmp/spdk2.sock 00:05:30.641 03:08:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59048 ']' 00:05:30.641 03:08:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.641 03:08:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:30.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:30.641 03:08:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:30.641 03:08:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:30.641 03:08:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.900 [2024-10-09 03:08:13.979370] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:05:30.900 [2024-10-09 03:08:13.980432] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59048 ] 00:05:30.900 [2024-10-09 03:08:14.127089] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:30.900 [2024-10-09 03:08:14.131166] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:31.159 [2024-10-09 03:08:14.356302] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:31.159 [2024-10-09 03:08:14.356410] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:05:31.159 [2024-10-09 03:08:14.356411] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.418 [2024-10-09 03:08:14.501499] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:31.987 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:31.987 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:31.987 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:31.987 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.987 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.987 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.987 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:31.987 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:31.987 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:31.987 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:31.987 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:31.987 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:31.987 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:31.987 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:31.987 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.987 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.987 [2024-10-09 03:08:15.100323] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59030 has claimed it. 00:05:31.987 request: 00:05:31.987 { 00:05:31.987 "method": "framework_enable_cpumask_locks", 00:05:31.987 "req_id": 1 00:05:31.987 } 00:05:31.987 Got JSON-RPC error response 00:05:31.987 response: 00:05:31.987 { 00:05:31.987 "code": -32603, 00:05:31.987 "message": "Failed to claim CPU core: 2" 00:05:31.987 } 00:05:31.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
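Both targets in this via_rpc variant were started with --disable-cpumask-locks, which is why they could come up despite the overlapping masks; the locks are only claimed later through the framework_enable_cpumask_locks RPC, and the second target's attempt fails with the -32603 "Failed to claim CPU core: 2" response shown above. A hedged sketch of driving the same RPC by hand with scripts/rpc.py (socket paths and method name are taken from this log; the responses are assumed to match what the test recorded):

  # first target, default socket /var/tmp/spdk.sock: succeeds and creates the lock files
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks

  # second target on /var/tmp/spdk2.sock: core 2 is already locked, so the call fails
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
    || echo 'JSON-RPC error -32603: Failed to claim CPU core: 2'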
00:05:31.987 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:31.987 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:31.987 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:31.987 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:31.987 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:31.987 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59030 /var/tmp/spdk.sock 00:05:31.987 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59030 ']' 00:05:31.987 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.987 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:31.987 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.987 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:31.987 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.248 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.248 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:32.248 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59048 /var/tmp/spdk2.sock 00:05:32.248 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59048 ']' 00:05:32.248 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:32.248 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.248 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:32.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
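The check_remaining_locks helper that runs next verifies that exactly the lock files for cores 0-2 are present: it expands a glob over /var/tmp/spdk_cpu_lock_* and compares it against a brace expansion of the expected names. A standalone illustration of that comparison (file names copied from the log; this is a sketch, not the helper's exact code):

  locks=(/var/tmp/spdk_cpu_lock_*)                    # whatever lock files actually exist
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0, 1 and 2
  [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo 'lock files match cores 0-2'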
00:05:32.248 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.248 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.512 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.512 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:32.512 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:32.512 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:32.512 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:32.512 ************************************ 00:05:32.512 END TEST locking_overlapped_coremask_via_rpc 00:05:32.512 ************************************ 00:05:32.512 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:32.512 00:05:32.512 real 0m2.842s 00:05:32.512 user 0m1.555s 00:05:32.512 sys 0m0.206s 00:05:32.512 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.512 03:08:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.512 03:08:15 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:32.512 03:08:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59030 ]] 00:05:32.512 03:08:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59030 00:05:32.512 03:08:15 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59030 ']' 00:05:32.512 03:08:15 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59030 00:05:32.512 03:08:15 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:32.512 03:08:15 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:32.512 03:08:15 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59030 00:05:32.512 03:08:15 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:32.512 killing process with pid 59030 00:05:32.512 03:08:15 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:32.512 03:08:15 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59030' 00:05:32.512 03:08:15 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59030 00:05:32.512 03:08:15 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59030 00:05:33.451 03:08:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59048 ]] 00:05:33.451 03:08:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59048 00:05:33.451 03:08:16 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59048 ']' 00:05:33.451 03:08:16 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59048 00:05:33.451 03:08:16 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:33.451 03:08:16 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:33.451 
03:08:16 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59048 00:05:33.451 killing process with pid 59048 00:05:33.451 03:08:16 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:33.451 03:08:16 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:33.451 03:08:16 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59048' 00:05:33.451 03:08:16 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59048 00:05:33.451 03:08:16 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59048 00:05:33.710 03:08:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:33.710 03:08:17 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:33.710 03:08:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59030 ]] 00:05:33.710 03:08:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59030 00:05:33.710 03:08:17 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59030 ']' 00:05:33.710 Process with pid 59030 is not found 00:05:33.710 03:08:17 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59030 00:05:33.710 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59030) - No such process 00:05:33.710 03:08:17 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59030 is not found' 00:05:33.710 Process with pid 59048 is not found 00:05:33.710 03:08:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59048 ]] 00:05:33.710 03:08:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59048 00:05:33.710 03:08:17 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59048 ']' 00:05:33.710 03:08:17 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59048 00:05:33.710 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59048) - No such process 00:05:33.710 03:08:17 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59048 is not found' 00:05:33.710 03:08:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:33.710 00:05:33.710 real 0m21.647s 00:05:33.710 user 0m39.317s 00:05:33.710 sys 0m5.886s 00:05:33.710 03:08:17 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.969 03:08:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.969 ************************************ 00:05:33.969 END TEST cpu_locks 00:05:33.969 ************************************ 00:05:33.969 ************************************ 00:05:33.969 END TEST event 00:05:33.969 ************************************ 00:05:33.969 00:05:33.969 real 0m50.964s 00:05:33.969 user 1m40.678s 00:05:33.969 sys 0m9.851s 00:05:33.969 03:08:17 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.969 03:08:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.969 03:08:17 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:33.969 03:08:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.969 03:08:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.969 03:08:17 -- common/autotest_common.sh@10 -- # set +x 00:05:33.969 ************************************ 00:05:33.969 START TEST thread 00:05:33.969 ************************************ 00:05:33.969 03:08:17 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:33.969 * Looking for test storage... 
00:05:33.969 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:33.969 03:08:17 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:33.969 03:08:17 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:05:33.969 03:08:17 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:34.229 03:08:17 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:34.229 03:08:17 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.229 03:08:17 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.229 03:08:17 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.229 03:08:17 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.229 03:08:17 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.229 03:08:17 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.229 03:08:17 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.229 03:08:17 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.229 03:08:17 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.229 03:08:17 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.229 03:08:17 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.229 03:08:17 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:34.229 03:08:17 thread -- scripts/common.sh@345 -- # : 1 00:05:34.229 03:08:17 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.229 03:08:17 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:34.229 03:08:17 thread -- scripts/common.sh@365 -- # decimal 1 00:05:34.229 03:08:17 thread -- scripts/common.sh@353 -- # local d=1 00:05:34.229 03:08:17 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.229 03:08:17 thread -- scripts/common.sh@355 -- # echo 1 00:05:34.229 03:08:17 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.229 03:08:17 thread -- scripts/common.sh@366 -- # decimal 2 00:05:34.229 03:08:17 thread -- scripts/common.sh@353 -- # local d=2 00:05:34.229 03:08:17 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.229 03:08:17 thread -- scripts/common.sh@355 -- # echo 2 00:05:34.229 03:08:17 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.229 03:08:17 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.229 03:08:17 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.229 03:08:17 thread -- scripts/common.sh@368 -- # return 0 00:05:34.229 03:08:17 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.229 03:08:17 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:34.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.229 --rc genhtml_branch_coverage=1 00:05:34.229 --rc genhtml_function_coverage=1 00:05:34.229 --rc genhtml_legend=1 00:05:34.229 --rc geninfo_all_blocks=1 00:05:34.229 --rc geninfo_unexecuted_blocks=1 00:05:34.229 00:05:34.229 ' 00:05:34.229 03:08:17 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:34.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.229 --rc genhtml_branch_coverage=1 00:05:34.229 --rc genhtml_function_coverage=1 00:05:34.229 --rc genhtml_legend=1 00:05:34.229 --rc geninfo_all_blocks=1 00:05:34.229 --rc geninfo_unexecuted_blocks=1 00:05:34.229 00:05:34.229 ' 00:05:34.229 03:08:17 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:34.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:34.229 --rc genhtml_branch_coverage=1 00:05:34.229 --rc genhtml_function_coverage=1 00:05:34.229 --rc genhtml_legend=1 00:05:34.229 --rc geninfo_all_blocks=1 00:05:34.229 --rc geninfo_unexecuted_blocks=1 00:05:34.229 00:05:34.229 ' 00:05:34.229 03:08:17 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:34.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.229 --rc genhtml_branch_coverage=1 00:05:34.229 --rc genhtml_function_coverage=1 00:05:34.229 --rc genhtml_legend=1 00:05:34.229 --rc geninfo_all_blocks=1 00:05:34.229 --rc geninfo_unexecuted_blocks=1 00:05:34.229 00:05:34.229 ' 00:05:34.229 03:08:17 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:34.229 03:08:17 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:34.229 03:08:17 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.229 03:08:17 thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.229 ************************************ 00:05:34.229 START TEST thread_poller_perf 00:05:34.229 ************************************ 00:05:34.229 03:08:17 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:34.229 [2024-10-09 03:08:17.338363] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:05:34.229 [2024-10-09 03:08:17.338602] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59190 ] 00:05:34.229 [2024-10-09 03:08:17.473451] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.488 [2024-10-09 03:08:17.593346] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.488 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:35.424 [2024-10-09T03:08:18.727Z] ====================================== 00:05:35.424 [2024-10-09T03:08:18.727Z] busy:2213581796 (cyc) 00:05:35.424 [2024-10-09T03:08:18.727Z] total_run_count: 354000 00:05:35.424 [2024-10-09T03:08:18.727Z] tsc_hz: 2200000000 (cyc) 00:05:35.424 [2024-10-09T03:08:18.727Z] ====================================== 00:05:35.424 [2024-10-09T03:08:18.727Z] poller_cost: 6253 (cyc), 2842 (nsec) 00:05:35.424 00:05:35.424 real 0m1.386s 00:05:35.424 user 0m1.215s 00:05:35.424 sys 0m0.063s 00:05:35.424 03:08:18 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.424 03:08:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:35.424 ************************************ 00:05:35.424 END TEST thread_poller_perf 00:05:35.424 ************************************ 00:05:35.684 03:08:18 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:35.684 03:08:18 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:35.684 03:08:18 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.684 03:08:18 thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.684 ************************************ 00:05:35.684 START TEST thread_poller_perf 00:05:35.684 ************************************ 00:05:35.684 03:08:18 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:35.684 [2024-10-09 03:08:18.775514] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:05:35.684 [2024-10-09 03:08:18.776305] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59225 ] 00:05:35.684 [2024-10-09 03:08:18.918122] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.943 Running 1000 pollers for 1 seconds with 0 microseconds period. 
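The poller_cost figure above appears to be nothing more than the busy cycle count divided by total_run_count, converted to nanoseconds with the advertised TSC rate; redoing that arithmetic for the 1-microsecond-period run with the numbers from the table (2213581796 cyc, 354000 runs, tsc_hz 2200000000) reproduces the reported 6253 cyc / 2842 nsec:

  awk 'BEGIN { busy=2213581796; runs=354000; hz=2200000000;
               cyc=busy/runs; printf "%.0f cyc, %.0f nsec\n", cyc, cyc/hz*1e9 }'
  # -> 6253 cyc, 2842 nsec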
00:05:35.943 [2024-10-09 03:08:19.007187] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.879 [2024-10-09T03:08:20.182Z] ====================================== 00:05:36.879 [2024-10-09T03:08:20.182Z] busy:2202206712 (cyc) 00:05:36.879 [2024-10-09T03:08:20.182Z] total_run_count: 5000000 00:05:36.879 [2024-10-09T03:08:20.182Z] tsc_hz: 2200000000 (cyc) 00:05:36.879 [2024-10-09T03:08:20.182Z] ====================================== 00:05:36.879 [2024-10-09T03:08:20.182Z] poller_cost: 440 (cyc), 200 (nsec) 00:05:36.879 ************************************ 00:05:36.879 END TEST thread_poller_perf 00:05:36.879 ************************************ 00:05:36.879 00:05:36.879 real 0m1.353s 00:05:36.879 user 0m1.184s 00:05:36.879 sys 0m0.062s 00:05:36.879 03:08:20 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.879 03:08:20 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:36.879 03:08:20 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:36.879 ************************************ 00:05:36.879 END TEST thread 00:05:36.879 ************************************ 00:05:36.879 00:05:36.879 real 0m3.050s 00:05:36.879 user 0m2.565s 00:05:36.879 sys 0m0.265s 00:05:36.879 03:08:20 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.879 03:08:20 thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.138 03:08:20 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:37.138 03:08:20 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:37.138 03:08:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.138 03:08:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.138 03:08:20 -- common/autotest_common.sh@10 -- # set +x 00:05:37.138 ************************************ 00:05:37.138 START TEST app_cmdline 00:05:37.138 ************************************ 00:05:37.138 03:08:20 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:37.138 * Looking for test storage... 
00:05:37.138 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:37.138 03:08:20 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:37.138 03:08:20 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:05:37.138 03:08:20 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:37.138 03:08:20 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:37.138 03:08:20 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.138 03:08:20 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.138 03:08:20 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.138 03:08:20 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.138 03:08:20 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.138 03:08:20 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.138 03:08:20 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.138 03:08:20 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.138 03:08:20 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.138 03:08:20 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.138 03:08:20 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.138 03:08:20 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:37.138 03:08:20 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:37.138 03:08:20 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.138 03:08:20 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:37.138 03:08:20 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:37.138 03:08:20 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:37.138 03:08:20 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.138 03:08:20 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:37.138 03:08:20 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.138 03:08:20 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:37.138 03:08:20 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:37.138 03:08:20 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.138 03:08:20 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:37.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:37.138 03:08:20 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.138 03:08:20 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.138 03:08:20 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.138 03:08:20 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:37.138 03:08:20 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.138 03:08:20 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:37.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.138 --rc genhtml_branch_coverage=1 00:05:37.138 --rc genhtml_function_coverage=1 00:05:37.138 --rc genhtml_legend=1 00:05:37.138 --rc geninfo_all_blocks=1 00:05:37.138 --rc geninfo_unexecuted_blocks=1 00:05:37.138 00:05:37.138 ' 00:05:37.138 03:08:20 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:37.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.139 --rc genhtml_branch_coverage=1 00:05:37.139 --rc genhtml_function_coverage=1 00:05:37.139 --rc genhtml_legend=1 00:05:37.139 --rc geninfo_all_blocks=1 00:05:37.139 --rc geninfo_unexecuted_blocks=1 00:05:37.139 00:05:37.139 ' 00:05:37.139 03:08:20 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:37.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.139 --rc genhtml_branch_coverage=1 00:05:37.139 --rc genhtml_function_coverage=1 00:05:37.139 --rc genhtml_legend=1 00:05:37.139 --rc geninfo_all_blocks=1 00:05:37.139 --rc geninfo_unexecuted_blocks=1 00:05:37.139 00:05:37.139 ' 00:05:37.139 03:08:20 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:37.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.139 --rc genhtml_branch_coverage=1 00:05:37.139 --rc genhtml_function_coverage=1 00:05:37.139 --rc genhtml_legend=1 00:05:37.139 --rc geninfo_all_blocks=1 00:05:37.139 --rc geninfo_unexecuted_blocks=1 00:05:37.139 00:05:37.139 ' 00:05:37.139 03:08:20 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:37.139 03:08:20 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59308 00:05:37.139 03:08:20 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59308 00:05:37.139 03:08:20 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 59308 ']' 00:05:37.139 03:08:20 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.139 03:08:20 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:37.139 03:08:20 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:37.139 03:08:20 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.139 03:08:20 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:37.139 03:08:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:37.398 [2024-10-09 03:08:20.480397] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
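The spdk_tgt for this cmdline test is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable over /var/tmp/spdk.sock; anything else is expected to be rejected with JSON-RPC error -32601 ("Method not found"), which is exactly what the env_dpdk_get_mem_stats call further down demonstrates. A hedged sketch of probing that allowlist by hand (method names and the rpc.py path come from this log; the outcomes are assumed to match the recorded ones):

  # on the allowlist: returns the version object shown below
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version

  # not on the allowlist: rejected with code -32601, "Method not found"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats \
    || echo 'rejected as expected'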
00:05:37.398 [2024-10-09 03:08:20.480716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59308 ] 00:05:37.398 [2024-10-09 03:08:20.621874] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.657 [2024-10-09 03:08:20.768943] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.657 [2024-10-09 03:08:20.869644] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:38.224 03:08:21 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:38.224 03:08:21 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:38.224 03:08:21 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:38.483 { 00:05:38.483 "version": "SPDK v25.01-pre git sha1 3c4904078", 00:05:38.483 "fields": { 00:05:38.483 "major": 25, 00:05:38.483 "minor": 1, 00:05:38.483 "patch": 0, 00:05:38.483 "suffix": "-pre", 00:05:38.483 "commit": "3c4904078" 00:05:38.483 } 00:05:38.483 } 00:05:38.483 03:08:21 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:38.483 03:08:21 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:38.483 03:08:21 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:38.483 03:08:21 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:38.483 03:08:21 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:38.483 03:08:21 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:38.483 03:08:21 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:38.483 03:08:21 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.483 03:08:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:38.483 03:08:21 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.483 03:08:21 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:38.483 03:08:21 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:38.483 03:08:21 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:38.483 03:08:21 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:38.483 03:08:21 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:38.483 03:08:21 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:38.483 03:08:21 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:38.483 03:08:21 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:38.483 03:08:21 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:38.483 03:08:21 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:38.741 03:08:21 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:38.741 03:08:21 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:38.741 03:08:21 app_cmdline -- common/autotest_common.sh@644 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:38.741 03:08:21 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:39.000 request: 00:05:39.000 { 00:05:39.000 "method": "env_dpdk_get_mem_stats", 00:05:39.000 "req_id": 1 00:05:39.000 } 00:05:39.000 Got JSON-RPC error response 00:05:39.000 response: 00:05:39.000 { 00:05:39.000 "code": -32601, 00:05:39.000 "message": "Method not found" 00:05:39.000 } 00:05:39.000 03:08:22 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:39.000 03:08:22 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:39.001 03:08:22 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:39.001 03:08:22 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:39.001 03:08:22 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59308 00:05:39.001 03:08:22 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 59308 ']' 00:05:39.001 03:08:22 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 59308 00:05:39.001 03:08:22 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:39.001 03:08:22 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:39.001 03:08:22 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59308 00:05:39.001 killing process with pid 59308 00:05:39.001 03:08:22 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:39.001 03:08:22 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:39.001 03:08:22 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59308' 00:05:39.001 03:08:22 app_cmdline -- common/autotest_common.sh@969 -- # kill 59308 00:05:39.001 03:08:22 app_cmdline -- common/autotest_common.sh@974 -- # wait 59308 00:05:39.569 ************************************ 00:05:39.569 END TEST app_cmdline 00:05:39.569 ************************************ 00:05:39.569 00:05:39.569 real 0m2.465s 00:05:39.569 user 0m2.924s 00:05:39.569 sys 0m0.615s 00:05:39.569 03:08:22 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.569 03:08:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:39.569 03:08:22 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:39.569 03:08:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.569 03:08:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.569 03:08:22 -- common/autotest_common.sh@10 -- # set +x 00:05:39.569 ************************************ 00:05:39.569 START TEST version 00:05:39.569 ************************************ 00:05:39.569 03:08:22 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:39.569 * Looking for test storage... 
00:05:39.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:39.569 03:08:22 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:39.569 03:08:22 version -- common/autotest_common.sh@1681 -- # lcov --version 00:05:39.569 03:08:22 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:39.828 03:08:22 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:39.828 03:08:22 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.828 03:08:22 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.828 03:08:22 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.828 03:08:22 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.828 03:08:22 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.828 03:08:22 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.828 03:08:22 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.828 03:08:22 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.828 03:08:22 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.829 03:08:22 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.829 03:08:22 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.829 03:08:22 version -- scripts/common.sh@344 -- # case "$op" in 00:05:39.829 03:08:22 version -- scripts/common.sh@345 -- # : 1 00:05:39.829 03:08:22 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.829 03:08:22 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:39.829 03:08:22 version -- scripts/common.sh@365 -- # decimal 1 00:05:39.829 03:08:22 version -- scripts/common.sh@353 -- # local d=1 00:05:39.829 03:08:22 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.829 03:08:22 version -- scripts/common.sh@355 -- # echo 1 00:05:39.829 03:08:22 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.829 03:08:22 version -- scripts/common.sh@366 -- # decimal 2 00:05:39.829 03:08:22 version -- scripts/common.sh@353 -- # local d=2 00:05:39.829 03:08:22 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.829 03:08:22 version -- scripts/common.sh@355 -- # echo 2 00:05:39.829 03:08:22 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.829 03:08:22 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.829 03:08:22 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.829 03:08:22 version -- scripts/common.sh@368 -- # return 0 00:05:39.829 03:08:22 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.829 03:08:22 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:39.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.829 --rc genhtml_branch_coverage=1 00:05:39.829 --rc genhtml_function_coverage=1 00:05:39.829 --rc genhtml_legend=1 00:05:39.829 --rc geninfo_all_blocks=1 00:05:39.829 --rc geninfo_unexecuted_blocks=1 00:05:39.829 00:05:39.829 ' 00:05:39.829 03:08:22 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:39.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.829 --rc genhtml_branch_coverage=1 00:05:39.829 --rc genhtml_function_coverage=1 00:05:39.829 --rc genhtml_legend=1 00:05:39.829 --rc geninfo_all_blocks=1 00:05:39.829 --rc geninfo_unexecuted_blocks=1 00:05:39.829 00:05:39.829 ' 00:05:39.829 03:08:22 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:39.829 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:39.829 --rc genhtml_branch_coverage=1 00:05:39.829 --rc genhtml_function_coverage=1 00:05:39.829 --rc genhtml_legend=1 00:05:39.829 --rc geninfo_all_blocks=1 00:05:39.829 --rc geninfo_unexecuted_blocks=1 00:05:39.829 00:05:39.829 ' 00:05:39.829 03:08:22 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:39.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.829 --rc genhtml_branch_coverage=1 00:05:39.829 --rc genhtml_function_coverage=1 00:05:39.829 --rc genhtml_legend=1 00:05:39.829 --rc geninfo_all_blocks=1 00:05:39.829 --rc geninfo_unexecuted_blocks=1 00:05:39.829 00:05:39.829 ' 00:05:39.829 03:08:22 version -- app/version.sh@17 -- # get_header_version major 00:05:39.829 03:08:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:39.829 03:08:22 version -- app/version.sh@14 -- # cut -f2 00:05:39.829 03:08:22 version -- app/version.sh@14 -- # tr -d '"' 00:05:39.829 03:08:22 version -- app/version.sh@17 -- # major=25 00:05:39.829 03:08:22 version -- app/version.sh@18 -- # get_header_version minor 00:05:39.829 03:08:22 version -- app/version.sh@14 -- # cut -f2 00:05:39.829 03:08:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:39.829 03:08:22 version -- app/version.sh@14 -- # tr -d '"' 00:05:39.829 03:08:22 version -- app/version.sh@18 -- # minor=1 00:05:39.829 03:08:22 version -- app/version.sh@19 -- # get_header_version patch 00:05:39.829 03:08:22 version -- app/version.sh@14 -- # cut -f2 00:05:39.829 03:08:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:39.829 03:08:22 version -- app/version.sh@14 -- # tr -d '"' 00:05:39.829 03:08:22 version -- app/version.sh@19 -- # patch=0 00:05:39.829 03:08:22 version -- app/version.sh@20 -- # get_header_version suffix 00:05:39.829 03:08:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:39.829 03:08:22 version -- app/version.sh@14 -- # cut -f2 00:05:39.829 03:08:22 version -- app/version.sh@14 -- # tr -d '"' 00:05:39.829 03:08:22 version -- app/version.sh@20 -- # suffix=-pre 00:05:39.829 03:08:22 version -- app/version.sh@22 -- # version=25.1 00:05:39.829 03:08:22 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:39.829 03:08:22 version -- app/version.sh@28 -- # version=25.1rc0 00:05:39.829 03:08:22 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:39.829 03:08:22 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:39.829 03:08:22 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:39.829 03:08:22 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:39.829 ************************************ 00:05:39.829 END TEST version 00:05:39.829 ************************************ 00:05:39.829 00:05:39.829 real 0m0.260s 00:05:39.829 user 0m0.152s 00:05:39.829 sys 0m0.145s 00:05:39.829 03:08:22 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.829 03:08:22 version -- common/autotest_common.sh@10 -- # set +x 00:05:39.829 03:08:23 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:39.829 03:08:23 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:39.829 03:08:23 -- spdk/autotest.sh@194 -- # uname -s 00:05:39.829 03:08:23 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:39.829 03:08:23 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:39.829 03:08:23 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:05:39.829 03:08:23 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:05:39.829 03:08:23 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:39.829 03:08:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.829 03:08:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.829 03:08:23 -- common/autotest_common.sh@10 -- # set +x 00:05:39.829 ************************************ 00:05:39.829 START TEST spdk_dd 00:05:39.829 ************************************ 00:05:39.829 03:08:23 spdk_dd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:39.829 * Looking for test storage... 00:05:40.089 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:40.089 03:08:23 spdk_dd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:40.089 03:08:23 spdk_dd -- common/autotest_common.sh@1681 -- # lcov --version 00:05:40.089 03:08:23 spdk_dd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:40.089 03:08:23 spdk_dd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:40.089 03:08:23 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.089 03:08:23 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.089 03:08:23 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.089 03:08:23 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.089 03:08:23 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.089 03:08:23 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.089 03:08:23 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.089 03:08:23 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.089 03:08:23 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.089 03:08:23 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.089 03:08:23 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.089 03:08:23 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:05:40.089 03:08:23 spdk_dd -- scripts/common.sh@345 -- # : 1 00:05:40.089 03:08:23 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.089 03:08:23 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.089 03:08:23 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:05:40.089 03:08:23 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:05:40.089 03:08:23 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.089 03:08:23 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:05:40.089 03:08:23 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.089 03:08:23 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:05:40.089 03:08:23 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:05:40.089 03:08:23 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.089 03:08:23 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:05:40.089 03:08:23 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.089 03:08:23 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.089 03:08:23 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.089 03:08:23 spdk_dd -- scripts/common.sh@368 -- # return 0 00:05:40.089 03:08:23 spdk_dd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.089 03:08:23 spdk_dd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:40.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.089 --rc genhtml_branch_coverage=1 00:05:40.089 --rc genhtml_function_coverage=1 00:05:40.089 --rc genhtml_legend=1 00:05:40.089 --rc geninfo_all_blocks=1 00:05:40.089 --rc geninfo_unexecuted_blocks=1 00:05:40.089 00:05:40.089 ' 00:05:40.089 03:08:23 spdk_dd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:40.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.089 --rc genhtml_branch_coverage=1 00:05:40.089 --rc genhtml_function_coverage=1 00:05:40.089 --rc genhtml_legend=1 00:05:40.089 --rc geninfo_all_blocks=1 00:05:40.089 --rc geninfo_unexecuted_blocks=1 00:05:40.089 00:05:40.089 ' 00:05:40.089 03:08:23 spdk_dd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:40.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.089 --rc genhtml_branch_coverage=1 00:05:40.089 --rc genhtml_function_coverage=1 00:05:40.089 --rc genhtml_legend=1 00:05:40.089 --rc geninfo_all_blocks=1 00:05:40.089 --rc geninfo_unexecuted_blocks=1 00:05:40.089 00:05:40.089 ' 00:05:40.089 03:08:23 spdk_dd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:40.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.089 --rc genhtml_branch_coverage=1 00:05:40.089 --rc genhtml_function_coverage=1 00:05:40.089 --rc genhtml_legend=1 00:05:40.089 --rc geninfo_all_blocks=1 00:05:40.089 --rc geninfo_unexecuted_blocks=1 00:05:40.089 00:05:40.089 ' 00:05:40.089 03:08:23 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:40.089 03:08:23 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:05:40.089 03:08:23 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:40.089 03:08:23 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:40.089 03:08:23 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:40.089 03:08:23 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.089 03:08:23 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.089 03:08:23 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.089 03:08:23 spdk_dd -- paths/export.sh@5 -- # export PATH 00:05:40.089 03:08:23 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.089 03:08:23 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:40.348 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:40.348 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:40.348 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:40.609 03:08:23 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:05:40.609 03:08:23 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@233 -- # local class 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@235 -- # local progif 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@236 -- # class=01 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:05:40.609 03:08:23 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@18 -- # local i 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@27 -- # return 0 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@18 -- # local i 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@27 -- # return 0 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:05:40.609 03:08:23 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:40.609 03:08:23 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:05:40.609 03:08:23 spdk_dd -- dd/common.sh@139 -- # local lib 00:05:40.609 03:08:23 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:05:40.609 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.609 03:08:23 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:40.609 03:08:23 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:05:40.609 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:05:40.609 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.609 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:05:40.609 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.609 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 
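The check_liburing scan traced here reads the NEEDED entries of the spdk_dd binary with objdump and flags whether it is linked against liburing. A minimal standalone sketch of that logic, assuming only the binary path shown in the trace (the function name is illustrative, not the actual dd/common.sh helper):

    check_liburing_sketch() {
        local lib liburing_in_use=0
        # Walk the dynamic dependencies (NEEDED lines) of the spdk_dd binary.
        while read -r _ lib _; do
            # Any NEEDED entry matching liburing.so.* means spdk_dd is linked to liburing.
            [[ $lib == liburing.so.* ]] && liburing_in_use=1
        done < <(objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep NEEDED)
        echo "liburing_in_use=$liburing_in_use"
    }

In the run below every libspdk_*.so and librte_*.so entry fails the match until liburing.so.2 is reached, which is why the trace ends with "spdk_dd linked to liburing" and liburing_in_use=1.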
00:05:40.609 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.609 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:05:40.609 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.609 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:05:40.609 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.14.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.1.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.15.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.2 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.610 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:05:40.611 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.611 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:05:40.611 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.611 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:05:40.611 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.611 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:05:40.611 03:08:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:40.611 03:08:23 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:05:40.611 03:08:23 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:05:40.611 * spdk_dd linked to liburing 00:05:40.611 03:08:23 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:40.611 03:08:23 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_PGO_USE=n 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=n 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=y 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@75 -- # CONFIG_FC=n 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:05:40.611 03:08:23 
spdk_dd -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:05:40.611 03:08:23 spdk_dd -- common/build_config.sh@89 -- # CONFIG_URING=y 00:05:40.611 03:08:23 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:05:40.611 03:08:23 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:05:40.611 03:08:23 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:05:40.611 03:08:23 spdk_dd -- dd/common.sh@153 -- # return 0 00:05:40.611 03:08:23 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:05:40.611 03:08:23 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:40.611 03:08:23 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:40.611 03:08:23 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.611 03:08:23 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:40.611 ************************************ 00:05:40.611 START TEST spdk_dd_basic_rw 00:05:40.611 ************************************ 00:05:40.611 03:08:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:40.611 * Looking for test storage... 00:05:40.611 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:40.611 03:08:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:40.611 03:08:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # lcov --version 00:05:40.611 03:08:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:40.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.871 --rc genhtml_branch_coverage=1 00:05:40.871 --rc genhtml_function_coverage=1 00:05:40.871 --rc genhtml_legend=1 00:05:40.871 --rc geninfo_all_blocks=1 00:05:40.871 --rc geninfo_unexecuted_blocks=1 00:05:40.871 00:05:40.871 ' 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:40.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.871 --rc genhtml_branch_coverage=1 00:05:40.871 --rc genhtml_function_coverage=1 00:05:40.871 --rc genhtml_legend=1 00:05:40.871 --rc geninfo_all_blocks=1 00:05:40.871 --rc geninfo_unexecuted_blocks=1 00:05:40.871 00:05:40.871 ' 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:40.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.871 --rc genhtml_branch_coverage=1 00:05:40.871 --rc genhtml_function_coverage=1 00:05:40.871 --rc genhtml_legend=1 00:05:40.871 --rc geninfo_all_blocks=1 00:05:40.871 --rc geninfo_unexecuted_blocks=1 00:05:40.871 00:05:40.871 ' 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:40.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.871 --rc genhtml_branch_coverage=1 00:05:40.871 --rc genhtml_function_coverage=1 00:05:40.871 --rc genhtml_legend=1 00:05:40.871 --rc geninfo_all_blocks=1 00:05:40.871 --rc geninfo_unexecuted_blocks=1 00:05:40.871 00:05:40.871 ' 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:40.871 03:08:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:40.872 03:08:23 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:05:40.872 03:08:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:05:40.872 03:08:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:05:40.872 03:08:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:05:41.133 03:08:24 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:05:41.133 03:08:24 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:05:41.134 03:08:24 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: 
Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:05:41.134 03:08:24 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:05:41.134 03:08:24 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:05:41.134 03:08:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:05:41.134 03:08:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:05:41.134 03:08:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:05:41.134 03:08:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:41.134 03:08:24 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:41.134 03:08:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:41.134 03:08:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:41.134 03:08:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.134 03:08:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:41.134 ************************************ 00:05:41.134 START TEST dd_bs_lt_native_bs 00:05:41.134 ************************************ 00:05:41.134 03:08:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1125 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:41.134 03:08:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:05:41.134 03:08:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:41.134 03:08:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.134 03:08:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.134 03:08:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.134 03:08:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.134 03:08:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.134 03:08:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.134 03:08:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.134 03:08:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:41.134 03:08:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:41.134 { 00:05:41.134 "subsystems": [ 00:05:41.134 { 00:05:41.134 "subsystem": "bdev", 00:05:41.134 "config": [ 00:05:41.134 { 00:05:41.134 "params": { 00:05:41.134 "trtype": "pcie", 00:05:41.134 "traddr": "0000:00:10.0", 00:05:41.134 "name": "Nvme0" 00:05:41.134 }, 00:05:41.134 "method": "bdev_nvme_attach_controller" 00:05:41.134 }, 00:05:41.134 { 00:05:41.134 "method": "bdev_wait_for_examine" 00:05:41.134 } 00:05:41.134 ] 00:05:41.134 } 00:05:41.134 ] 00:05:41.134 } 00:05:41.134 [2024-10-09 03:08:24.254812] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:05:41.134 [2024-10-09 03:08:24.255112] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59659 ] 00:05:41.134 [2024-10-09 03:08:24.394760] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.393 [2024-10-09 03:08:24.546098] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.393 [2024-10-09 03:08:24.623016] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:41.652 [2024-10-09 03:08:24.746509] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:05:41.652 [2024-10-09 03:08:24.746574] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:41.653 [2024-10-09 03:08:24.916924] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:41.923 03:08:25 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:05:41.923 03:08:25 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:41.923 ************************************ 00:05:41.923 END TEST dd_bs_lt_native_bs 00:05:41.923 ************************************ 00:05:41.923 03:08:25 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:05:41.923 03:08:25 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:05:41.923 03:08:25 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:05:41.923 03:08:25 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:41.923 00:05:41.923 real 0m0.852s 00:05:41.923 user 0m0.593s 00:05:41.923 sys 0m0.212s 00:05:41.923 
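For reference, the dd_bs_lt_native_bs case traced above scrapes the in-use LBA format from the Identify dump (LBA Format #04, data size 4096) to learn the native block size, then runs spdk_dd with --bs=2048 against Nvme0n1 and expects the command to fail; that is why the run ends with "--bs value cannot be less than ... native block size" and the NOT wrapper turns the non-zero exit into a pass. Below is a minimal standalone sketch of the same negative check, not the test's own code: the binary path, PCIe address and bdev JSON are copied from the trace, while the temp-file name and surrounding script are illustrative (the real logic lives in the dd test scripts and autotest_common.sh).

    #!/usr/bin/env bash
    # Sketch only: expect spdk_dd to reject a --bs smaller than the namespace's native block size.
    set -u
    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    CONF='{"subsystems":[{"subsystem":"bdev","config":[
      {"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},
      {"method":"bdev_wait_for_examine"}]}]}'
    printf 'A%.0s' {1..4096} > /tmp/dd_in            # 4 KiB of input, one native-sized block
    if "$DD" --if=/tmp/dd_in --ob=Nvme0n1 --bs=2048 --json <(echo "$CONF"); then
        echo "FAIL: spdk_dd accepted bs=2048 on a 4096-byte-native namespace" >&2
        exit 1
    fi
    echo "PASS: bs < native block size was rejected, as in the trace above"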
03:08:25 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.923 03:08:25 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:05:41.923 03:08:25 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:05:41.923 03:08:25 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:41.923 03:08:25 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.923 03:08:25 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:41.923 ************************************ 00:05:41.923 START TEST dd_rw 00:05:41.923 ************************************ 00:05:41.923 03:08:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1125 -- # basic_rw 4096 00:05:41.923 03:08:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:05:41.923 03:08:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:05:41.923 03:08:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:05:41.923 03:08:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:05:41.923 03:08:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:41.923 03:08:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:41.923 03:08:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:41.923 03:08:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:41.923 03:08:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:41.923 03:08:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:41.923 03:08:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:41.923 03:08:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:41.923 03:08:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:41.923 03:08:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:41.923 03:08:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:41.923 03:08:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:41.923 03:08:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:41.923 03:08:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:42.506 03:08:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:05:42.506 03:08:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:42.506 03:08:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:42.506 03:08:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:42.506 [2024-10-09 03:08:25.741010] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:05:42.506 [2024-10-09 03:08:25.741438] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59696 ] 00:05:42.506 { 00:05:42.506 "subsystems": [ 00:05:42.506 { 00:05:42.506 "subsystem": "bdev", 00:05:42.506 "config": [ 00:05:42.506 { 00:05:42.506 "params": { 00:05:42.506 "trtype": "pcie", 00:05:42.507 "traddr": "0000:00:10.0", 00:05:42.507 "name": "Nvme0" 00:05:42.507 }, 00:05:42.507 "method": "bdev_nvme_attach_controller" 00:05:42.507 }, 00:05:42.507 { 00:05:42.507 "method": "bdev_wait_for_examine" 00:05:42.507 } 00:05:42.507 ] 00:05:42.507 } 00:05:42.507 ] 00:05:42.507 } 00:05:42.767 [2024-10-09 03:08:25.879488] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.767 [2024-10-09 03:08:26.006992] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.025 [2024-10-09 03:08:26.080820] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:43.025  [2024-10-09T03:08:26.586Z] Copying: 60/60 [kB] (average 19 MBps) 00:05:43.283 00:05:43.283 03:08:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:05:43.283 03:08:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:43.283 03:08:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:43.283 03:08:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:43.283 [2024-10-09 03:08:26.554568] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
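The bracketed "DPDK EAL parameters" entries repeat for every spdk_dd invocation because each run starts a fresh SPDK application. The commented reference below glosses the flag set with the standard DPDK EAL meanings; the glosses are explanatory only and are not part of the captured output.

    # spdk_dd --no-shconf              run without shared EAL configuration files (standalone process)
    # -c 0x1                           core mask: single reactor pinned to core 0
    # --huge-unlink                    unlink hugepage files after they are mapped
    # --no-telemetry                   disable the DPDK telemetry socket
    # --log-level=lib.eal:6 ...        per-component log verbosity levels
    # --iova-mode=pa                   use physical addresses as IO virtual addresses
    # --base-virtaddr=0x200000000000   fixed base address for hugepage mappings
    # --match-allocations              free hugepage memory back in the same chunks it was allocated
    # --file-prefix=spdk_pidNNNNN      per-process prefix so concurrent SPDK processes do not collide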
00:05:43.283 [2024-10-09 03:08:26.554991] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59715 ] 00:05:43.283 { 00:05:43.283 "subsystems": [ 00:05:43.283 { 00:05:43.283 "subsystem": "bdev", 00:05:43.283 "config": [ 00:05:43.283 { 00:05:43.283 "params": { 00:05:43.283 "trtype": "pcie", 00:05:43.283 "traddr": "0000:00:10.0", 00:05:43.283 "name": "Nvme0" 00:05:43.283 }, 00:05:43.283 "method": "bdev_nvme_attach_controller" 00:05:43.283 }, 00:05:43.283 { 00:05:43.283 "method": "bdev_wait_for_examine" 00:05:43.283 } 00:05:43.283 ] 00:05:43.283 } 00:05:43.283 ] 00:05:43.283 } 00:05:43.541 [2024-10-09 03:08:26.693998] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.541 [2024-10-09 03:08:26.836834] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.800 [2024-10-09 03:08:26.909631] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:43.800  [2024-10-09T03:08:27.361Z] Copying: 60/60 [kB] (average 19 MBps) 00:05:44.058 00:05:44.058 03:08:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:44.058 03:08:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:44.058 03:08:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:44.058 03:08:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:44.058 03:08:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:44.058 03:08:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:44.058 03:08:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:44.058 03:08:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:44.058 03:08:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:44.058 03:08:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:44.058 03:08:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:44.317 { 00:05:44.317 "subsystems": [ 00:05:44.317 { 00:05:44.317 "subsystem": "bdev", 00:05:44.317 "config": [ 00:05:44.317 { 00:05:44.317 "params": { 00:05:44.317 "trtype": "pcie", 00:05:44.317 "traddr": "0000:00:10.0", 00:05:44.317 "name": "Nvme0" 00:05:44.317 }, 00:05:44.317 "method": "bdev_nvme_attach_controller" 00:05:44.317 }, 00:05:44.317 { 00:05:44.317 "method": "bdev_wait_for_examine" 00:05:44.317 } 00:05:44.317 ] 00:05:44.317 } 00:05:44.317 ] 00:05:44.317 } 00:05:44.317 [2024-10-09 03:08:27.402994] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
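Each dd_rw iteration in the trace follows the same four-step cycle: write a generated dump file to the bdev, read it back into a second dump file, diff the two, then wipe the region with zeroes before the next (bs, qd) combination. A condensed sketch of one cycle is shown below; the paths, block size, queue depth and count come from the bs=4096/qd=1 pass above, while the standalone layout (and the CONF file name) is illustrative rather than the driver script itself.

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0      # 61440 bytes of generated data
    DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
    CONF=/tmp/nvme0.json                                      # same bdev config as in the trace (illustrative path)

    "$DD" --if="$DUMP0" --ob=Nvme0n1 --bs=4096 --qd=1 --json "$CONF"             # write 15 x 4 KiB
    "$DD" --ib=Nvme0n1 --of="$DUMP1" --bs=4096 --qd=1 --count=15 --json "$CONF"  # read the same 15 blocks back
    diff -q "$DUMP0" "$DUMP1"                                                    # must be byte-identical
    "$DD" --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json "$CONF"      # wipe 1 MiB before the next pass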
00:05:44.317 [2024-10-09 03:08:27.403103] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59736 ] 00:05:44.317 [2024-10-09 03:08:27.538945] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.575 [2024-10-09 03:08:27.657769] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.575 [2024-10-09 03:08:27.731127] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:44.575  [2024-10-09T03:08:28.445Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:45.142 00:05:45.142 03:08:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:45.142 03:08:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:45.142 03:08:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:45.142 03:08:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:45.142 03:08:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:45.142 03:08:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:45.142 03:08:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:45.710 03:08:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:05:45.710 03:08:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:45.710 03:08:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:45.710 03:08:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:45.710 { 00:05:45.710 "subsystems": [ 00:05:45.710 { 00:05:45.710 "subsystem": "bdev", 00:05:45.710 "config": [ 00:05:45.710 { 00:05:45.710 "params": { 00:05:45.710 "trtype": "pcie", 00:05:45.710 "traddr": "0000:00:10.0", 00:05:45.710 "name": "Nvme0" 00:05:45.710 }, 00:05:45.710 "method": "bdev_nvme_attach_controller" 00:05:45.710 }, 00:05:45.710 { 00:05:45.710 "method": "bdev_wait_for_examine" 00:05:45.710 } 00:05:45.710 ] 00:05:45.710 } 00:05:45.710 ] 00:05:45.710 } 00:05:45.710 [2024-10-09 03:08:28.823862] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:05:45.710 [2024-10-09 03:08:28.823989] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59755 ] 00:05:45.710 [2024-10-09 03:08:28.963765] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.969 [2024-10-09 03:08:29.078236] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.969 [2024-10-09 03:08:29.137108] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:45.969  [2024-10-09T03:08:29.531Z] Copying: 60/60 [kB] (average 58 MBps) 00:05:46.228 00:05:46.228 03:08:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:46.228 03:08:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:05:46.228 03:08:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:46.228 03:08:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:46.487 [2024-10-09 03:08:29.553660] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:05:46.487 [2024-10-09 03:08:29.554006] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59774 ] 00:05:46.487 { 00:05:46.487 "subsystems": [ 00:05:46.487 { 00:05:46.487 "subsystem": "bdev", 00:05:46.487 "config": [ 00:05:46.487 { 00:05:46.487 "params": { 00:05:46.487 "trtype": "pcie", 00:05:46.487 "traddr": "0000:00:10.0", 00:05:46.487 "name": "Nvme0" 00:05:46.487 }, 00:05:46.487 "method": "bdev_nvme_attach_controller" 00:05:46.487 }, 00:05:46.487 { 00:05:46.487 "method": "bdev_wait_for_examine" 00:05:46.487 } 00:05:46.487 ] 00:05:46.487 } 00:05:46.487 ] 00:05:46.487 } 00:05:46.487 [2024-10-09 03:08:29.687528] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.746 [2024-10-09 03:08:29.802575] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.746 [2024-10-09 03:08:29.857991] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:46.746  [2024-10-09T03:08:30.309Z] Copying: 60/60 [kB] (average 58 MBps) 00:05:47.006 00:05:47.006 03:08:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:47.006 03:08:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:47.006 03:08:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:47.006 03:08:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:47.006 03:08:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:47.006 03:08:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:47.006 03:08:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:47.006 03:08:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:47.006 03:08:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:47.006 03:08:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:47.006 03:08:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:47.006 { 00:05:47.006 "subsystems": [ 00:05:47.006 { 00:05:47.006 "subsystem": "bdev", 00:05:47.006 "config": [ 00:05:47.006 { 00:05:47.006 "params": { 00:05:47.006 "trtype": "pcie", 00:05:47.006 "traddr": "0000:00:10.0", 00:05:47.006 "name": "Nvme0" 00:05:47.006 }, 00:05:47.006 "method": "bdev_nvme_attach_controller" 00:05:47.006 }, 00:05:47.006 { 00:05:47.006 "method": "bdev_wait_for_examine" 00:05:47.006 } 00:05:47.006 ] 00:05:47.006 } 00:05:47.006 ] 00:05:47.006 } 00:05:47.006 [2024-10-09 03:08:30.256689] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:05:47.006 [2024-10-09 03:08:30.256787] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59788 ] 00:05:47.264 [2024-10-09 03:08:30.398786] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.264 [2024-10-09 03:08:30.490123] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.264 [2024-10-09 03:08:30.547580] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:47.524  [2024-10-09T03:08:31.086Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:47.783 00:05:47.783 03:08:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:47.783 03:08:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:47.783 03:08:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:47.783 03:08:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:47.783 03:08:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:47.783 03:08:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:47.783 03:08:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:47.783 03:08:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:48.350 03:08:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:05:48.350 03:08:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:48.350 03:08:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:48.350 03:08:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:48.350 [2024-10-09 03:08:31.509380] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
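The switch visible above from bs=4096 (count=15, 61440 bytes) to bs=8192 (count=7, 57344 bytes) comes from the loop captured at the start of dd_rw: the block sizes are the native 4096 shifted left by 0..2, each run at queue depths 1 and 64. The trace sets the counts explicitly (15, 7, 3); the small sketch below just reproduces the same sizing with integer division, so the per-pass transfer stays close to 60 KiB. Variable names mirror the trace (native_bs, bss, qds); the rest is illustrative.

    native_bs=4096
    qds=(1 64)
    bss=()
    for i in {0..2}; do bss+=( $((native_bs << i)) ); done   # 4096 8192 16384

    for bs in "${bss[@]}"; do
        for qd in "${qds[@]}"; do
            count=$(( 61440 / bs ))        # 15, 7, 3 -> 61440, 57344, 49152 bytes per pass
            echo "bs=$bs qd=$qd count=$count size=$((count * bs))"
        done
    done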
00:05:48.350 [2024-10-09 03:08:31.509480] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59814 ] 00:05:48.350 { 00:05:48.350 "subsystems": [ 00:05:48.350 { 00:05:48.350 "subsystem": "bdev", 00:05:48.350 "config": [ 00:05:48.350 { 00:05:48.350 "params": { 00:05:48.350 "trtype": "pcie", 00:05:48.350 "traddr": "0000:00:10.0", 00:05:48.350 "name": "Nvme0" 00:05:48.350 }, 00:05:48.350 "method": "bdev_nvme_attach_controller" 00:05:48.350 }, 00:05:48.350 { 00:05:48.350 "method": "bdev_wait_for_examine" 00:05:48.350 } 00:05:48.350 ] 00:05:48.350 } 00:05:48.350 ] 00:05:48.350 } 00:05:48.350 [2024-10-09 03:08:31.641605] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.608 [2024-10-09 03:08:31.730472] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.608 [2024-10-09 03:08:31.787696] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:48.608  [2024-10-09T03:08:32.169Z] Copying: 56/56 [kB] (average 27 MBps) 00:05:48.866 00:05:48.866 03:08:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:05:48.866 03:08:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:48.866 03:08:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:48.866 03:08:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:48.866 [2024-10-09 03:08:32.141542] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
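The "Copying" progress lines give a rough sense of scale rather than sustained bandwidth, since each pass moves only 48-60 kB: the bs=4096 passes above report about 19 MBps at qd=1 and about 58 MBps at qd=64 for the same 61440 bytes. Treating MBps as 10^6 bytes per second (the tool may round differently, so this is order-of-magnitude arithmetic only), that works out to roughly 3.2 ms for 15 serial 4-KiB commands versus about 1 ms when they are issued in parallel:

    echo "scale=2; 61440 * 1000 / (19 * 10^6)" | bc   # ~3.23 ms total at qd=1  (~0.22 ms per 4 KiB command)
    echo "scale=2; 61440 * 1000 / (58 * 10^6)" | bc   # ~1.05 ms total at qd=64 (commands overlapped)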
00:05:48.866 [2024-10-09 03:08:32.141650] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59822 ] 00:05:48.866 { 00:05:48.867 "subsystems": [ 00:05:48.867 { 00:05:48.867 "subsystem": "bdev", 00:05:48.867 "config": [ 00:05:48.867 { 00:05:48.867 "params": { 00:05:48.867 "trtype": "pcie", 00:05:48.867 "traddr": "0000:00:10.0", 00:05:48.867 "name": "Nvme0" 00:05:48.867 }, 00:05:48.867 "method": "bdev_nvme_attach_controller" 00:05:48.867 }, 00:05:48.867 { 00:05:48.867 "method": "bdev_wait_for_examine" 00:05:48.867 } 00:05:48.867 ] 00:05:48.867 } 00:05:48.867 ] 00:05:48.867 } 00:05:49.125 [2024-10-09 03:08:32.269613] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.125 [2024-10-09 03:08:32.369882] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.125 [2024-10-09 03:08:32.426905] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:49.383  [2024-10-09T03:08:32.944Z] Copying: 56/56 [kB] (average 27 MBps) 00:05:49.641 00:05:49.641 03:08:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:49.641 03:08:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:49.641 03:08:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:49.641 03:08:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:49.641 03:08:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:49.641 03:08:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:49.641 03:08:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:49.641 03:08:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:49.641 03:08:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:49.641 03:08:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:49.641 03:08:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:49.641 [2024-10-09 03:08:32.825538] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:05:49.641 [2024-10-09 03:08:32.825658] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59843 ] 00:05:49.641 { 00:05:49.641 "subsystems": [ 00:05:49.641 { 00:05:49.641 "subsystem": "bdev", 00:05:49.641 "config": [ 00:05:49.641 { 00:05:49.641 "params": { 00:05:49.641 "trtype": "pcie", 00:05:49.641 "traddr": "0000:00:10.0", 00:05:49.641 "name": "Nvme0" 00:05:49.641 }, 00:05:49.641 "method": "bdev_nvme_attach_controller" 00:05:49.641 }, 00:05:49.641 { 00:05:49.641 "method": "bdev_wait_for_examine" 00:05:49.641 } 00:05:49.641 ] 00:05:49.641 } 00:05:49.641 ] 00:05:49.641 } 00:05:49.899 [2024-10-09 03:08:32.963510] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.899 [2024-10-09 03:08:33.049893] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.899 [2024-10-09 03:08:33.105576] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:50.157  [2024-10-09T03:08:33.460Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:50.157 00:05:50.157 03:08:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:50.157 03:08:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:50.157 03:08:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:50.157 03:08:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:50.157 03:08:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:50.157 03:08:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:50.157 03:08:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:50.748 03:08:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:05:50.748 03:08:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:50.748 03:08:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:50.748 03:08:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:51.006 [2024-10-09 03:08:34.067807] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:05:51.006 [2024-10-09 03:08:34.067934] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59862 ] 00:05:51.006 { 00:05:51.006 "subsystems": [ 00:05:51.006 { 00:05:51.006 "subsystem": "bdev", 00:05:51.006 "config": [ 00:05:51.006 { 00:05:51.006 "params": { 00:05:51.006 "trtype": "pcie", 00:05:51.006 "traddr": "0000:00:10.0", 00:05:51.006 "name": "Nvme0" 00:05:51.006 }, 00:05:51.006 "method": "bdev_nvme_attach_controller" 00:05:51.006 }, 00:05:51.006 { 00:05:51.006 "method": "bdev_wait_for_examine" 00:05:51.006 } 00:05:51.006 ] 00:05:51.006 } 00:05:51.006 ] 00:05:51.006 } 00:05:51.006 [2024-10-09 03:08:34.200673] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.006 [2024-10-09 03:08:34.303687] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.265 [2024-10-09 03:08:34.360853] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:51.265  [2024-10-09T03:08:34.827Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:51.524 00:05:51.524 03:08:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:05:51.524 03:08:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:51.524 03:08:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:51.524 03:08:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:51.524 [2024-10-09 03:08:34.766748] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:05:51.524 { 00:05:51.524 "subsystems": [ 00:05:51.524 { 00:05:51.524 "subsystem": "bdev", 00:05:51.524 "config": [ 00:05:51.524 { 00:05:51.524 "params": { 00:05:51.524 "trtype": "pcie", 00:05:51.524 "traddr": "0000:00:10.0", 00:05:51.524 "name": "Nvme0" 00:05:51.524 }, 00:05:51.524 "method": "bdev_nvme_attach_controller" 00:05:51.524 }, 00:05:51.524 { 00:05:51.524 "method": "bdev_wait_for_examine" 00:05:51.524 } 00:05:51.524 ] 00:05:51.524 } 00:05:51.524 ] 00:05:51.524 } 00:05:51.524 [2024-10-09 03:08:34.767924] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59881 ] 00:05:51.783 [2024-10-09 03:08:34.905853] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.783 [2024-10-09 03:08:35.005976] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.783 [2024-10-09 03:08:35.059644] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.042  [2024-10-09T03:08:35.604Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:52.301 00:05:52.301 03:08:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:52.301 03:08:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:52.301 03:08:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:52.301 03:08:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:52.301 03:08:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:52.301 03:08:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:52.301 03:08:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:52.301 03:08:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:52.301 03:08:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:52.301 03:08:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:52.301 03:08:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:52.301 [2024-10-09 03:08:35.450402] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:05:52.301 [2024-10-09 03:08:35.450503] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59891 ] 00:05:52.301 { 00:05:52.301 "subsystems": [ 00:05:52.301 { 00:05:52.301 "subsystem": "bdev", 00:05:52.301 "config": [ 00:05:52.301 { 00:05:52.301 "params": { 00:05:52.301 "trtype": "pcie", 00:05:52.301 "traddr": "0000:00:10.0", 00:05:52.301 "name": "Nvme0" 00:05:52.301 }, 00:05:52.301 "method": "bdev_nvme_attach_controller" 00:05:52.301 }, 00:05:52.301 { 00:05:52.301 "method": "bdev_wait_for_examine" 00:05:52.301 } 00:05:52.301 ] 00:05:52.301 } 00:05:52.301 ] 00:05:52.301 } 00:05:52.301 [2024-10-09 03:08:35.587741] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.560 [2024-10-09 03:08:35.670804] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.560 [2024-10-09 03:08:35.728819] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.560  [2024-10-09T03:08:36.128Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:52.825 00:05:52.825 03:08:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:52.825 03:08:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:52.825 03:08:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:05:52.825 03:08:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:05:52.825 03:08:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:05:52.825 03:08:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:52.825 03:08:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:52.825 03:08:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:53.391 03:08:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:05:53.391 03:08:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:53.391 03:08:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:53.391 03:08:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:53.391 [2024-10-09 03:08:36.579100] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:05:53.391 [2024-10-09 03:08:36.579214] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59922 ] 00:05:53.391 { 00:05:53.391 "subsystems": [ 00:05:53.391 { 00:05:53.391 "subsystem": "bdev", 00:05:53.391 "config": [ 00:05:53.391 { 00:05:53.391 "params": { 00:05:53.391 "trtype": "pcie", 00:05:53.391 "traddr": "0000:00:10.0", 00:05:53.391 "name": "Nvme0" 00:05:53.391 }, 00:05:53.391 "method": "bdev_nvme_attach_controller" 00:05:53.391 }, 00:05:53.391 { 00:05:53.391 "method": "bdev_wait_for_examine" 00:05:53.391 } 00:05:53.391 ] 00:05:53.391 } 00:05:53.391 ] 00:05:53.391 } 00:05:53.650 [2024-10-09 03:08:36.714743] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.650 [2024-10-09 03:08:36.812717] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.650 [2024-10-09 03:08:36.868091] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:53.909  [2024-10-09T03:08:37.212Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:53.909 00:05:53.909 03:08:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:05:53.909 03:08:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:53.909 03:08:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:53.909 03:08:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:54.167 [2024-10-09 03:08:37.254800] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:05:54.167 [2024-10-09 03:08:37.254919] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59931 ] 00:05:54.167 { 00:05:54.167 "subsystems": [ 00:05:54.167 { 00:05:54.167 "subsystem": "bdev", 00:05:54.167 "config": [ 00:05:54.167 { 00:05:54.167 "params": { 00:05:54.167 "trtype": "pcie", 00:05:54.167 "traddr": "0000:00:10.0", 00:05:54.167 "name": "Nvme0" 00:05:54.167 }, 00:05:54.167 "method": "bdev_nvme_attach_controller" 00:05:54.167 }, 00:05:54.167 { 00:05:54.167 "method": "bdev_wait_for_examine" 00:05:54.167 } 00:05:54.167 ] 00:05:54.167 } 00:05:54.167 ] 00:05:54.167 } 00:05:54.167 [2024-10-09 03:08:37.388323] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.426 [2024-10-09 03:08:37.495171] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.426 [2024-10-09 03:08:37.552980] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:54.426  [2024-10-09T03:08:37.987Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:54.684 00:05:54.684 03:08:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:54.684 03:08:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:05:54.684 03:08:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:54.684 03:08:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:54.684 03:08:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:05:54.684 03:08:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:54.684 03:08:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:54.684 03:08:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:54.684 03:08:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:54.685 03:08:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:54.685 03:08:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:54.685 { 00:05:54.685 "subsystems": [ 00:05:54.685 { 00:05:54.685 "subsystem": "bdev", 00:05:54.685 "config": [ 00:05:54.685 { 00:05:54.685 "params": { 00:05:54.685 "trtype": "pcie", 00:05:54.685 "traddr": "0000:00:10.0", 00:05:54.685 "name": "Nvme0" 00:05:54.685 }, 00:05:54.685 "method": "bdev_nvme_attach_controller" 00:05:54.685 }, 00:05:54.685 { 00:05:54.685 "method": "bdev_wait_for_examine" 00:05:54.685 } 00:05:54.685 ] 00:05:54.685 } 00:05:54.685 ] 00:05:54.685 } 00:05:54.685 [2024-10-09 03:08:37.959500] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:05:54.685 [2024-10-09 03:08:37.959620] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59952 ] 00:05:54.943 [2024-10-09 03:08:38.098354] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.943 [2024-10-09 03:08:38.190347] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.201 [2024-10-09 03:08:38.246211] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:55.201  [2024-10-09T03:08:38.763Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:55.460 00:05:55.460 03:08:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:55.460 03:08:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:05:55.460 03:08:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:05:55.460 03:08:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:05:55.460 03:08:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:55.460 03:08:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:55.460 03:08:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:55.757 03:08:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:05:55.757 03:08:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:55.757 03:08:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:55.757 03:08:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:56.015 [2024-10-09 03:08:39.095563] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:05:56.015 [2024-10-09 03:08:39.095674] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59971 ] 00:05:56.015 { 00:05:56.015 "subsystems": [ 00:05:56.015 { 00:05:56.015 "subsystem": "bdev", 00:05:56.015 "config": [ 00:05:56.015 { 00:05:56.015 "params": { 00:05:56.015 "trtype": "pcie", 00:05:56.015 "traddr": "0000:00:10.0", 00:05:56.015 "name": "Nvme0" 00:05:56.015 }, 00:05:56.015 "method": "bdev_nvme_attach_controller" 00:05:56.015 }, 00:05:56.015 { 00:05:56.015 "method": "bdev_wait_for_examine" 00:05:56.015 } 00:05:56.015 ] 00:05:56.015 } 00:05:56.015 ] 00:05:56.015 } 00:05:56.015 [2024-10-09 03:08:39.235789] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.274 [2024-10-09 03:08:39.333433] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.274 [2024-10-09 03:08:39.390549] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:56.274  [2024-10-09T03:08:39.836Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:56.533 00:05:56.533 03:08:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:05:56.533 03:08:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:56.533 03:08:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:56.533 03:08:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:56.533 [2024-10-09 03:08:39.784630] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:05:56.533 [2024-10-09 03:08:39.784735] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59991 ] 00:05:56.533 { 00:05:56.533 "subsystems": [ 00:05:56.533 { 00:05:56.533 "subsystem": "bdev", 00:05:56.533 "config": [ 00:05:56.533 { 00:05:56.533 "params": { 00:05:56.533 "trtype": "pcie", 00:05:56.533 "traddr": "0000:00:10.0", 00:05:56.533 "name": "Nvme0" 00:05:56.533 }, 00:05:56.533 "method": "bdev_nvme_attach_controller" 00:05:56.533 }, 00:05:56.533 { 00:05:56.533 "method": "bdev_wait_for_examine" 00:05:56.533 } 00:05:56.533 ] 00:05:56.533 } 00:05:56.533 ] 00:05:56.533 } 00:05:56.791 [2024-10-09 03:08:39.920853] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.791 [2024-10-09 03:08:40.040887] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.050 [2024-10-09 03:08:40.100881] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:57.050  [2024-10-09T03:08:40.611Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:57.308 00:05:57.308 03:08:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:57.308 03:08:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:05:57.308 03:08:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:57.308 03:08:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:57.308 03:08:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:05:57.308 03:08:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:57.308 03:08:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:57.308 03:08:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:57.308 03:08:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:57.308 03:08:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:57.308 03:08:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:57.308 { 00:05:57.308 "subsystems": [ 00:05:57.308 { 00:05:57.308 "subsystem": "bdev", 00:05:57.308 "config": [ 00:05:57.308 { 00:05:57.308 "params": { 00:05:57.308 "trtype": "pcie", 00:05:57.308 "traddr": "0000:00:10.0", 00:05:57.308 "name": "Nvme0" 00:05:57.308 }, 00:05:57.308 "method": "bdev_nvme_attach_controller" 00:05:57.308 }, 00:05:57.308 { 00:05:57.308 "method": "bdev_wait_for_examine" 00:05:57.308 } 00:05:57.308 ] 00:05:57.308 } 00:05:57.308 ] 00:05:57.308 } 00:05:57.308 [2024-10-09 03:08:40.537876] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:05:57.309 [2024-10-09 03:08:40.538008] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60001 ] 00:05:57.567 [2024-10-09 03:08:40.678486] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.567 [2024-10-09 03:08:40.787718] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.567 [2024-10-09 03:08:40.849756] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:57.826  [2024-10-09T03:08:41.388Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:58.085 00:05:58.085 00:05:58.085 real 0m16.108s 00:05:58.085 user 0m11.803s 00:05:58.085 sys 0m5.979s 00:05:58.085 03:08:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.085 ************************************ 00:05:58.085 END TEST dd_rw 00:05:58.085 ************************************ 00:05:58.085 03:08:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:58.085 03:08:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:05:58.085 03:08:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.085 03:08:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.085 03:08:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:58.085 ************************************ 00:05:58.085 START TEST dd_rw_offset 00:05:58.085 ************************************ 00:05:58.085 03:08:41 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1125 -- # basic_offset 00:05:58.085 03:08:41 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:05:58.085 03:08:41 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:05:58.085 03:08:41 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:05:58.085 03:08:41 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:58.085 03:08:41 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:05:58.085 03:08:41 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=2uhl6tzyh66ev9ss96f4pg1b9g9o5k0ly8khgc2ps2x3p52vc09jpty6bgam4vzdpjdc0md7c2j8tlcy0lhwgq7i4vs6zs3rtjglhbq975cnpshzeepaltwp5wx4v239kixzu4f0wuho4etbkymwghfijqm3hi8sv9ttkzvd7er1cra9cg8b6q7mjduyzj3bcram60fcb4oq02204cesqq3won3p9n1wn9b3k5shr7wixuu08yptvkf8h6zci49f9m36nn0cct9qj02jrzcmxx8v77jlfbagn0pxk7kibobawqm5v2m7reska96bsoj9mu4bhig9jc6t3r9hu7513tg1hyly7j6qd1rb0x3uh599fumtw823k9db0g7muz52tdrhkzs1nbkcd4xydfyjv1f60tf54k1pecw42nywdl34cvnpgsjcx6hkq3lrdm7y9bz3aghguqnqyie8blicrvd4orralacq94ezzn9j0mem2ptaw3996oz2he5mepzpg3tq3j73p3ezyr8191sh58bl8y74os0d2asuxqgz83c1w6oovbyw4jgaepcuukh0f4p786c9v4okstdr6tuue7za17qkvr54dysj789y1jkzs3defef4eu1nb73adcznoo9w01uw171v2nugv4gvn9ffak9yi1l5p1ynsy8xgmz120t4b24mahcv7559m3nc5rcru888jpsq84bbev9h8zhhps22leb33bhihkstzyjepy2fv59252q458hnnx7xr30an1beg920y1a9udohsjs4el5wjyp1fy3ik4e0btit6gr7n7rr4p7ixemo8dcnf34jnajpcyh057ey45t9yqfbg3txk7eqs954d1u5qu5mreqmktb56w02x5t676f2pjmiy76q4lfwf4bkyjygzk232iuvm4o8ejszi2mxa489rwus6lajoov8htmcffd63d5svjepcb5r4v68n7bw7cv462f8h1powbph0iktjyvdbxgu6585s953o5mkqa9v3ce25pbq42a1ggh0to2jcw4mwtf4h6gjjwnyuhxn300csg1uyg971zuh0tuq2hxyprq72cx5xg3m80cxkshwdvob4l224v350mezeslty64jfy9ntivnhvxu6rq66scv38on334a4843ctt5kpak5mtftudnb41k1m4lywhs5wswnzdezgdf46blcdlnp8wn71szeex2aubnduxx48v7r3pujzv1exjxkk9avp812rnkfg7b9xo1rv3g51trul0bs07hi7tyyrgeimncqd1gc7cf0i68nkw92xezm0s4f7gphsjqp7ufau0t3da7nlquvfcf74cseaba7okh1d3h4eawlxmhlawpz2m5i6or6ig3gl69bpuzxoxw7y1zihdd5o88bpbrb8fygygnipi38su8lvbzi141a8m24lgybx2t84jgv3o3gighx4pi2hprvxl08run5dnu9btx8w3l2gwwx6k9jir2o665ji22q9uluk85ve2ehbxx7nwoxawker1hzgsvorblavtqvfpotinn115xdqqps2rukvc6ttsjqr8nv8o6x0ga5cnjzb35pgt3xt8uvoahczyrf1jf16usg6n0m0n6pdbyccnq1ajklmzwq2t2y1xwt3czh4a7fuouj7db6w1907l0aa8yajomosahswnnkdsq34qwhuhdwmjahu3w57je0x048x5s60e3xxas7yujmic0one7q98n0gl3wrp1d9v3v8c3sj0cibujd7fq751t2217pgs994wd7kbleepvpsywskl5nb23pf1p9dbd8o0thlu0ll6v75hq1ulkddonx4foir1ruoipob5nfpcrcn46x8aofi5v7033shjwb2fmx8isuo082rl4apa9y1xll6fongh15mo9xqghbgtzpagjt8873rcqx7n3rz5nsorzp58aql5lgdk15dv71geof9nwbroxh41jajppy0u3624lfbq5rft94frsni1uqjhsj77lef9xxhvmy81o56y8zvc4yof5svacggp5ojeyzl72gczh2pxiwnkyrx0e7kaayw2g1vcvq580mwoe160bh7bzicf6xfw4o36w8ll1y4i5wpe16pwa9cnyjsixo7l1v5gm8vurpgx6zovvk4w56vpi8ck7aqx7msbysoskfxjwzg5qcbut638tobq0lsyvkfdchrpp5k8y1c454gfrepylaigjrctsnwpt216hxj44mi31vsakf4lot9uow2spaz0whcc4ea3yu1m3vhipemeyxa5wh4pnd4j87uegl9q14f1l9has5ekt53iiau47u3zoj4bbfowxbx9knh46dlhp6dkgz4g82gvl6x6hbq9pkiym8v0038u1njdrbfdc01ryrfgfvbqvxra03q7nb0j7zq3ljgc09hau7i8njmikdce61wjwke7ehm34nu1yaumgfsbt0j30h4sumb5mefpjf5c92fjkud8tox18zgutbbzr1bnor2tqfarh5z7c1terzckh7smxq56iwyj25pkc9pk5j3g81n1oas03knxdqjno0xsz3ufvocak1jmq1qlx3datwsrvfinsmapuo28u7yeyw547ab9f8pji9vvlqzp4xay024lrkyi7ibna5cngbjbp2239upfkjsao7sdz0ac59lyuezunm46hbe25wunm31dj5egw6opzxueq3y5zhn8aekt35ri24mcj6wp46978xqijt8yj2selx4lu23ttzjq28yl8bnag0fy9xax3d2nxqgjczevpqckpfnj9ato10fh0uocfrv95x3v6teds3ngd1zuplj7abv94dxyjjhvy5vz1c8q91v5sdgfeubsh0n2u66ev5y42fv6rmvzh07a0vazsfnm9sfdv4qoaj7a6gei76endnbblg4g8ms6wo0pynhiju1bse8ck8juasez1i3oxpl9zu0bnn2gje5yhgjip5mivu7kc6xvwmlvc95f47og34g245oht6md0ps1tvcxgrijscge120j8nz64m4ne4jl6maoewo55w6jq9px4d9o7jzaoy9aap8bxnlgiraki3qbq27eagogivxuyox7zna8ob616yi17yg0ijawp51gtyu8tja3j4uec3vfmrv2awhrgei70zvg9jmjspvhrj8173u2utswmmy2fp9jso8pkibd5wd8ji93i42t3r14lymn8raj1uzt22oppim3mrrox0r397gvppyr0wn95cd4ec16uf5cazxhzw0cznskzq9b5j2sij9dcyc3ono9n7zbuy3gig2i9gol5wljpn50yyiddxazhfht07tltexf9ynbsazi1wz6svvg7omquplreeeqcov88hpk74necntukj2qlshh5mycusyrgyqe25bs75c601av7lw1yzts4rf3nqotyv1lk05orvdv1ul9uo86uhof5q2kqgg1xnlp5aw1cn9zuto1deqawkdm1hlq8l6leja5pmuy2rij3aviddqc0f7f5se3fppp4chivvf
v53nlzbyhl57q1hk29mv4hgwti1dwrqk9njuw8dayqfisa6r3tc2efj6sukx0iggvevx7vjbnqp7w75xzd3on2a76kmltjjmpladjpke4qrzjvwyefmnf99njrcynbi4cfgtrf75wktf156mhyz4ovbbk65cd8gsnqi9ee0o3zx9okmtowboqx94blveah5dpsnntkrfx5t0stlhmrl6y4k2vnv6cj13q5q6xbj4wo33sm4247zgbeqjl8zvkzvq6jkna0sppu98404kdpxkg87hjd71fehtx0i1qo38hmmataec2skv2twzk6zgjl15jd1engtecvyhek663dg2yb99hj1bo1n4oywmyi3vz8a1de6vuixg3zxbuvy162ktkm3a9a2qkzyxeex92zg7dwafk1y5x5j8kh8etlizg3erq7nfwrhfgzctmknroejehsah8dohclrvl2o1mdym32c4nue4sinu9oqedh3g1rvzxwrld9ncikzn8zaurm76evtixk5opsweb1xkw28cqwp41jndxsxj78 00:05:58.085 03:08:41 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:05:58.085 03:08:41 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:05:58.085 03:08:41 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:05:58.085 03:08:41 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:58.085 { 00:05:58.085 "subsystems": [ 00:05:58.085 { 00:05:58.085 "subsystem": "bdev", 00:05:58.085 "config": [ 00:05:58.085 { 00:05:58.085 "params": { 00:05:58.085 "trtype": "pcie", 00:05:58.085 "traddr": "0000:00:10.0", 00:05:58.085 "name": "Nvme0" 00:05:58.085 }, 00:05:58.085 "method": "bdev_nvme_attach_controller" 00:05:58.085 }, 00:05:58.085 { 00:05:58.085 "method": "bdev_wait_for_examine" 00:05:58.085 } 00:05:58.085 ] 00:05:58.085 } 00:05:58.085 ] 00:05:58.085 } 00:05:58.085 [2024-10-09 03:08:41.376151] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:05:58.085 [2024-10-09 03:08:41.376242] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60037 ] 00:05:58.344 [2024-10-09 03:08:41.515264] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.344 [2024-10-09 03:08:41.623766] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.602 [2024-10-09 03:08:41.683361] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:58.602  [2024-10-09T03:08:42.164Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:05:58.861 00:05:58.861 03:08:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:05:58.861 03:08:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:05:58.861 03:08:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:05:58.861 03:08:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:58.861 { 00:05:58.861 "subsystems": [ 00:05:58.861 { 00:05:58.861 "subsystem": "bdev", 00:05:58.861 "config": [ 00:05:58.861 { 00:05:58.861 "params": { 00:05:58.861 "trtype": "pcie", 00:05:58.861 "traddr": "0000:00:10.0", 00:05:58.861 "name": "Nvme0" 00:05:58.861 }, 00:05:58.861 "method": "bdev_nvme_attach_controller" 00:05:58.861 }, 00:05:58.861 { 00:05:58.861 "method": "bdev_wait_for_examine" 00:05:58.861 } 00:05:58.861 ] 00:05:58.861 } 00:05:58.861 ] 00:05:58.861 } 00:05:58.861 [2024-10-09 03:08:42.086705] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
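The dd_rw_offset case generates a 4096-byte alphanumeric payload (the long data= string above), writes it one block into the bdev with --seek=1, reads that block back with --skip=1 --count=1, and then compares the first 4096 bytes of the output against the original string (the read -rn4096 data_check and [[ ... == ... ]] entries that follow). A minimal sketch of the same round-trip is below; the spdk_dd flags mirror the trace, while the payload one-liner, file names and CONF path are illustrative rather than the test's own helpers.

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    CONF=/tmp/nvme0.json                                   # same bdev config as earlier in the log (illustrative path)
    IN=/tmp/offset.in; OUT=/tmp/offset.out

    tr -dc 'a-z0-9' < /dev/urandom | head -c 4096 > "$IN"             # 4 KiB alphanumeric payload
    "$DD" --if="$IN"  --ob=Nvme0n1 --seek=1 --json "$CONF"            # write at block offset 1
    "$DD" --ib=Nvme0n1 --of="$OUT" --skip=1 --count=1 --json "$CONF"  # read that one block back
    cmp -n 4096 "$IN" "$OUT" && echo "offset round-trip OK"           # compare the first 4096 bytes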
00:05:58.861 [2024-10-09 03:08:42.086839] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60056 ] 00:05:59.119 [2024-10-09 03:08:42.226778] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.119 [2024-10-09 03:08:42.320161] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.119 [2024-10-09 03:08:42.377243] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:59.378  [2024-10-09T03:08:42.940Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:05:59.637 00:05:59.637 03:08:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:05:59.638 03:08:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 2uhl6tzyh66ev9ss96f4pg1b9g9o5k0ly8khgc2ps2x3p52vc09jpty6bgam4vzdpjdc0md7c2j8tlcy0lhwgq7i4vs6zs3rtjglhbq975cnpshzeepaltwp5wx4v239kixzu4f0wuho4etbkymwghfijqm3hi8sv9ttkzvd7er1cra9cg8b6q7mjduyzj3bcram60fcb4oq02204cesqq3won3p9n1wn9b3k5shr7wixuu08yptvkf8h6zci49f9m36nn0cct9qj02jrzcmxx8v77jlfbagn0pxk7kibobawqm5v2m7reska96bsoj9mu4bhig9jc6t3r9hu7513tg1hyly7j6qd1rb0x3uh599fumtw823k9db0g7muz52tdrhkzs1nbkcd4xydfyjv1f60tf54k1pecw42nywdl34cvnpgsjcx6hkq3lrdm7y9bz3aghguqnqyie8blicrvd4orralacq94ezzn9j0mem2ptaw3996oz2he5mepzpg3tq3j73p3ezyr8191sh58bl8y74os0d2asuxqgz83c1w6oovbyw4jgaepcuukh0f4p786c9v4okstdr6tuue7za17qkvr54dysj789y1jkzs3defef4eu1nb73adcznoo9w01uw171v2nugv4gvn9ffak9yi1l5p1ynsy8xgmz120t4b24mahcv7559m3nc5rcru888jpsq84bbev9h8zhhps22leb33bhihkstzyjepy2fv59252q458hnnx7xr30an1beg920y1a9udohsjs4el5wjyp1fy3ik4e0btit6gr7n7rr4p7ixemo8dcnf34jnajpcyh057ey45t9yqfbg3txk7eqs954d1u5qu5mreqmktb56w02x5t676f2pjmiy76q4lfwf4bkyjygzk232iuvm4o8ejszi2mxa489rwus6lajoov8htmcffd63d5svjepcb5r4v68n7bw7cv462f8h1powbph0iktjyvdbxgu6585s953o5mkqa9v3ce25pbq42a1ggh0to2jcw4mwtf4h6gjjwnyuhxn300csg1uyg971zuh0tuq2hxyprq72cx5xg3m80cxkshwdvob4l224v350mezeslty64jfy9ntivnhvxu6rq66scv38on334a4843ctt5kpak5mtftudnb41k1m4lywhs5wswnzdezgdf46blcdlnp8wn71szeex2aubnduxx48v7r3pujzv1exjxkk9avp812rnkfg7b9xo1rv3g51trul0bs07hi7tyyrgeimncqd1gc7cf0i68nkw92xezm0s4f7gphsjqp7ufau0t3da7nlquvfcf74cseaba7okh1d3h4eawlxmhlawpz2m5i6or6ig3gl69bpuzxoxw7y1zihdd5o88bpbrb8fygygnipi38su8lvbzi141a8m24lgybx2t84jgv3o3gighx4pi2hprvxl08run5dnu9btx8w3l2gwwx6k9jir2o665ji22q9uluk85ve2ehbxx7nwoxawker1hzgsvorblavtqvfpotinn115xdqqps2rukvc6ttsjqr8nv8o6x0ga5cnjzb35pgt3xt8uvoahczyrf1jf16usg6n0m0n6pdbyccnq1ajklmzwq2t2y1xwt3czh4a7fuouj7db6w1907l0aa8yajomosahswnnkdsq34qwhuhdwmjahu3w57je0x048x5s60e3xxas7yujmic0one7q98n0gl3wrp1d9v3v8c3sj0cibujd7fq751t2217pgs994wd7kbleepvpsywskl5nb23pf1p9dbd8o0thlu0ll6v75hq1ulkddonx4foir1ruoipob5nfpcrcn46x8aofi5v7033shjwb2fmx8isuo082rl4apa9y1xll6fongh15mo9xqghbgtzpagjt8873rcqx7n3rz5nsorzp58aql5lgdk15dv71geof9nwbroxh41jajppy0u3624lfbq5rft94frsni1uqjhsj77lef9xxhvmy81o56y8zvc4yof5svacggp5ojeyzl72gczh2pxiwnkyrx0e7kaayw2g1vcvq580mwoe160bh7bzicf6xfw4o36w8ll1y4i5wpe16pwa9cnyjsixo7l1v5gm8vurpgx6zovvk4w56vpi8ck7aqx7msbysoskfxjwzg5qcbut638tobq0lsyvkfdchrpp5k8y1c454gfrepylaigjrctsnwpt216hxj44mi31vsakf4lot9uow2spaz0whcc4ea3yu1m3vhipemeyxa5wh4pnd4j87uegl9q14f1l9has5ekt53iiau47u3zoj4bbfowxbx9knh46dlhp6dkgz4g82gvl6x6hbq9pkiym8v0038u1njdrbfdc01ryrfgfvbqvxra03q7nb0j7zq3ljgc09hau7i8njmikdce61wjwke7ehm34nu1yaumgfsbt0j30h4sumb5mefpjf5c92fjkud8tox18zgutbbzr1bnor2tqfarh5z7c1terzckh7smxq56iwyj25pkc9pk5j3g81n1oas03knxdqjno0xsz3ufvocak1jmq1qlx3datwsrv
finsmapuo28u7yeyw547ab9f8pji9vvlqzp4xay024lrkyi7ibna5cngbjbp2239upfkjsao7sdz0ac59lyuezunm46hbe25wunm31dj5egw6opzxueq3y5zhn8aekt35ri24mcj6wp46978xqijt8yj2selx4lu23ttzjq28yl8bnag0fy9xax3d2nxqgjczevpqckpfnj9ato10fh0uocfrv95x3v6teds3ngd1zuplj7abv94dxyjjhvy5vz1c8q91v5sdgfeubsh0n2u66ev5y42fv6rmvzh07a0vazsfnm9sfdv4qoaj7a6gei76endnbblg4g8ms6wo0pynhiju1bse8ck8juasez1i3oxpl9zu0bnn2gje5yhgjip5mivu7kc6xvwmlvc95f47og34g245oht6md0ps1tvcxgrijscge120j8nz64m4ne4jl6maoewo55w6jq9px4d9o7jzaoy9aap8bxnlgiraki3qbq27eagogivxuyox7zna8ob616yi17yg0ijawp51gtyu8tja3j4uec3vfmrv2awhrgei70zvg9jmjspvhrj8173u2utswmmy2fp9jso8pkibd5wd8ji93i42t3r14lymn8raj1uzt22oppim3mrrox0r397gvppyr0wn95cd4ec16uf5cazxhzw0cznskzq9b5j2sij9dcyc3ono9n7zbuy3gig2i9gol5wljpn50yyiddxazhfht07tltexf9ynbsazi1wz6svvg7omquplreeeqcov88hpk74necntukj2qlshh5mycusyrgyqe25bs75c601av7lw1yzts4rf3nqotyv1lk05orvdv1ul9uo86uhof5q2kqgg1xnlp5aw1cn9zuto1deqawkdm1hlq8l6leja5pmuy2rij3aviddqc0f7f5se3fppp4chivvfv53nlzbyhl57q1hk29mv4hgwti1dwrqk9njuw8dayqfisa6r3tc2efj6sukx0iggvevx7vjbnqp7w75xzd3on2a76kmltjjmpladjpke4qrzjvwyefmnf99njrcynbi4cfgtrf75wktf156mhyz4ovbbk65cd8gsnqi9ee0o3zx9okmtowboqx94blveah5dpsnntkrfx5t0stlhmrl6y4k2vnv6cj13q5q6xbj4wo33sm4247zgbeqjl8zvkzvq6jkna0sppu98404kdpxkg87hjd71fehtx0i1qo38hmmataec2skv2twzk6zgjl15jd1engtecvyhek663dg2yb99hj1bo1n4oywmyi3vz8a1de6vuixg3zxbuvy162ktkm3a9a2qkzyxeex92zg7dwafk1y5x5j8kh8etlizg3erq7nfwrhfgzctmknroejehsah8dohclrvl2o1mdym32c4nue4sinu9oqedh3g1rvzxwrld9ncikzn8zaurm76evtixk5opsweb1xkw28cqwp41jndxsxj78 == \2\u\h\l\6\t\z\y\h\6\6\e\v\9\s\s\9\6\f\4\p\g\1\b\9\g\9\o\5\k\0\l\y\8\k\h\g\c\2\p\s\2\x\3\p\5\2\v\c\0\9\j\p\t\y\6\b\g\a\m\4\v\z\d\p\j\d\c\0\m\d\7\c\2\j\8\t\l\c\y\0\l\h\w\g\q\7\i\4\v\s\6\z\s\3\r\t\j\g\l\h\b\q\9\7\5\c\n\p\s\h\z\e\e\p\a\l\t\w\p\5\w\x\4\v\2\3\9\k\i\x\z\u\4\f\0\w\u\h\o\4\e\t\b\k\y\m\w\g\h\f\i\j\q\m\3\h\i\8\s\v\9\t\t\k\z\v\d\7\e\r\1\c\r\a\9\c\g\8\b\6\q\7\m\j\d\u\y\z\j\3\b\c\r\a\m\6\0\f\c\b\4\o\q\0\2\2\0\4\c\e\s\q\q\3\w\o\n\3\p\9\n\1\w\n\9\b\3\k\5\s\h\r\7\w\i\x\u\u\0\8\y\p\t\v\k\f\8\h\6\z\c\i\4\9\f\9\m\3\6\n\n\0\c\c\t\9\q\j\0\2\j\r\z\c\m\x\x\8\v\7\7\j\l\f\b\a\g\n\0\p\x\k\7\k\i\b\o\b\a\w\q\m\5\v\2\m\7\r\e\s\k\a\9\6\b\s\o\j\9\m\u\4\b\h\i\g\9\j\c\6\t\3\r\9\h\u\7\5\1\3\t\g\1\h\y\l\y\7\j\6\q\d\1\r\b\0\x\3\u\h\5\9\9\f\u\m\t\w\8\2\3\k\9\d\b\0\g\7\m\u\z\5\2\t\d\r\h\k\z\s\1\n\b\k\c\d\4\x\y\d\f\y\j\v\1\f\6\0\t\f\5\4\k\1\p\e\c\w\4\2\n\y\w\d\l\3\4\c\v\n\p\g\s\j\c\x\6\h\k\q\3\l\r\d\m\7\y\9\b\z\3\a\g\h\g\u\q\n\q\y\i\e\8\b\l\i\c\r\v\d\4\o\r\r\a\l\a\c\q\9\4\e\z\z\n\9\j\0\m\e\m\2\p\t\a\w\3\9\9\6\o\z\2\h\e\5\m\e\p\z\p\g\3\t\q\3\j\7\3\p\3\e\z\y\r\8\1\9\1\s\h\5\8\b\l\8\y\7\4\o\s\0\d\2\a\s\u\x\q\g\z\8\3\c\1\w\6\o\o\v\b\y\w\4\j\g\a\e\p\c\u\u\k\h\0\f\4\p\7\8\6\c\9\v\4\o\k\s\t\d\r\6\t\u\u\e\7\z\a\1\7\q\k\v\r\5\4\d\y\s\j\7\8\9\y\1\j\k\z\s\3\d\e\f\e\f\4\e\u\1\n\b\7\3\a\d\c\z\n\o\o\9\w\0\1\u\w\1\7\1\v\2\n\u\g\v\4\g\v\n\9\f\f\a\k\9\y\i\1\l\5\p\1\y\n\s\y\8\x\g\m\z\1\2\0\t\4\b\2\4\m\a\h\c\v\7\5\5\9\m\3\n\c\5\r\c\r\u\8\8\8\j\p\s\q\8\4\b\b\e\v\9\h\8\z\h\h\p\s\2\2\l\e\b\3\3\b\h\i\h\k\s\t\z\y\j\e\p\y\2\f\v\5\9\2\5\2\q\4\5\8\h\n\n\x\7\x\r\3\0\a\n\1\b\e\g\9\2\0\y\1\a\9\u\d\o\h\s\j\s\4\e\l\5\w\j\y\p\1\f\y\3\i\k\4\e\0\b\t\i\t\6\g\r\7\n\7\r\r\4\p\7\i\x\e\m\o\8\d\c\n\f\3\4\j\n\a\j\p\c\y\h\0\5\7\e\y\4\5\t\9\y\q\f\b\g\3\t\x\k\7\e\q\s\9\5\4\d\1\u\5\q\u\5\m\r\e\q\m\k\t\b\5\6\w\0\2\x\5\t\6\7\6\f\2\p\j\m\i\y\7\6\q\4\l\f\w\f\4\b\k\y\j\y\g\z\k\2\3\2\i\u\v\m\4\o\8\e\j\s\z\i\2\m\x\a\4\8\9\r\w\u\s\6\l\a\j\o\o\v\8\h\t\m\c\f\f\d\6\3\d\5\s\v\j\e\p\c\b\5\r\4\v\6\8\n\7\b\w\7\c\v\4\6\2\f\8\h\1\p\o\w\b\p\h\0\i\k\t\j\y\v\d\b\x\g\u\6\5\8\5\s\9\5\3\o\5\m\k\q\a\9\v\3\c\e\2\5\p\b\
q\4\2\a\1\g\g\h\0\t\o\2\j\c\w\4\m\w\t\f\4\h\6\g\j\j\w\n\y\u\h\x\n\3\0\0\c\s\g\1\u\y\g\9\7\1\z\u\h\0\t\u\q\2\h\x\y\p\r\q\7\2\c\x\5\x\g\3\m\8\0\c\x\k\s\h\w\d\v\o\b\4\l\2\2\4\v\3\5\0\m\e\z\e\s\l\t\y\6\4\j\f\y\9\n\t\i\v\n\h\v\x\u\6\r\q\6\6\s\c\v\3\8\o\n\3\3\4\a\4\8\4\3\c\t\t\5\k\p\a\k\5\m\t\f\t\u\d\n\b\4\1\k\1\m\4\l\y\w\h\s\5\w\s\w\n\z\d\e\z\g\d\f\4\6\b\l\c\d\l\n\p\8\w\n\7\1\s\z\e\e\x\2\a\u\b\n\d\u\x\x\4\8\v\7\r\3\p\u\j\z\v\1\e\x\j\x\k\k\9\a\v\p\8\1\2\r\n\k\f\g\7\b\9\x\o\1\r\v\3\g\5\1\t\r\u\l\0\b\s\0\7\h\i\7\t\y\y\r\g\e\i\m\n\c\q\d\1\g\c\7\c\f\0\i\6\8\n\k\w\9\2\x\e\z\m\0\s\4\f\7\g\p\h\s\j\q\p\7\u\f\a\u\0\t\3\d\a\7\n\l\q\u\v\f\c\f\7\4\c\s\e\a\b\a\7\o\k\h\1\d\3\h\4\e\a\w\l\x\m\h\l\a\w\p\z\2\m\5\i\6\o\r\6\i\g\3\g\l\6\9\b\p\u\z\x\o\x\w\7\y\1\z\i\h\d\d\5\o\8\8\b\p\b\r\b\8\f\y\g\y\g\n\i\p\i\3\8\s\u\8\l\v\b\z\i\1\4\1\a\8\m\2\4\l\g\y\b\x\2\t\8\4\j\g\v\3\o\3\g\i\g\h\x\4\p\i\2\h\p\r\v\x\l\0\8\r\u\n\5\d\n\u\9\b\t\x\8\w\3\l\2\g\w\w\x\6\k\9\j\i\r\2\o\6\6\5\j\i\2\2\q\9\u\l\u\k\8\5\v\e\2\e\h\b\x\x\7\n\w\o\x\a\w\k\e\r\1\h\z\g\s\v\o\r\b\l\a\v\t\q\v\f\p\o\t\i\n\n\1\1\5\x\d\q\q\p\s\2\r\u\k\v\c\6\t\t\s\j\q\r\8\n\v\8\o\6\x\0\g\a\5\c\n\j\z\b\3\5\p\g\t\3\x\t\8\u\v\o\a\h\c\z\y\r\f\1\j\f\1\6\u\s\g\6\n\0\m\0\n\6\p\d\b\y\c\c\n\q\1\a\j\k\l\m\z\w\q\2\t\2\y\1\x\w\t\3\c\z\h\4\a\7\f\u\o\u\j\7\d\b\6\w\1\9\0\7\l\0\a\a\8\y\a\j\o\m\o\s\a\h\s\w\n\n\k\d\s\q\3\4\q\w\h\u\h\d\w\m\j\a\h\u\3\w\5\7\j\e\0\x\0\4\8\x\5\s\6\0\e\3\x\x\a\s\7\y\u\j\m\i\c\0\o\n\e\7\q\9\8\n\0\g\l\3\w\r\p\1\d\9\v\3\v\8\c\3\s\j\0\c\i\b\u\j\d\7\f\q\7\5\1\t\2\2\1\7\p\g\s\9\9\4\w\d\7\k\b\l\e\e\p\v\p\s\y\w\s\k\l\5\n\b\2\3\p\f\1\p\9\d\b\d\8\o\0\t\h\l\u\0\l\l\6\v\7\5\h\q\1\u\l\k\d\d\o\n\x\4\f\o\i\r\1\r\u\o\i\p\o\b\5\n\f\p\c\r\c\n\4\6\x\8\a\o\f\i\5\v\7\0\3\3\s\h\j\w\b\2\f\m\x\8\i\s\u\o\0\8\2\r\l\4\a\p\a\9\y\1\x\l\l\6\f\o\n\g\h\1\5\m\o\9\x\q\g\h\b\g\t\z\p\a\g\j\t\8\8\7\3\r\c\q\x\7\n\3\r\z\5\n\s\o\r\z\p\5\8\a\q\l\5\l\g\d\k\1\5\d\v\7\1\g\e\o\f\9\n\w\b\r\o\x\h\4\1\j\a\j\p\p\y\0\u\3\6\2\4\l\f\b\q\5\r\f\t\9\4\f\r\s\n\i\1\u\q\j\h\s\j\7\7\l\e\f\9\x\x\h\v\m\y\8\1\o\5\6\y\8\z\v\c\4\y\o\f\5\s\v\a\c\g\g\p\5\o\j\e\y\z\l\7\2\g\c\z\h\2\p\x\i\w\n\k\y\r\x\0\e\7\k\a\a\y\w\2\g\1\v\c\v\q\5\8\0\m\w\o\e\1\6\0\b\h\7\b\z\i\c\f\6\x\f\w\4\o\3\6\w\8\l\l\1\y\4\i\5\w\p\e\1\6\p\w\a\9\c\n\y\j\s\i\x\o\7\l\1\v\5\g\m\8\v\u\r\p\g\x\6\z\o\v\v\k\4\w\5\6\v\p\i\8\c\k\7\a\q\x\7\m\s\b\y\s\o\s\k\f\x\j\w\z\g\5\q\c\b\u\t\6\3\8\t\o\b\q\0\l\s\y\v\k\f\d\c\h\r\p\p\5\k\8\y\1\c\4\5\4\g\f\r\e\p\y\l\a\i\g\j\r\c\t\s\n\w\p\t\2\1\6\h\x\j\4\4\m\i\3\1\v\s\a\k\f\4\l\o\t\9\u\o\w\2\s\p\a\z\0\w\h\c\c\4\e\a\3\y\u\1\m\3\v\h\i\p\e\m\e\y\x\a\5\w\h\4\p\n\d\4\j\8\7\u\e\g\l\9\q\1\4\f\1\l\9\h\a\s\5\e\k\t\5\3\i\i\a\u\4\7\u\3\z\o\j\4\b\b\f\o\w\x\b\x\9\k\n\h\4\6\d\l\h\p\6\d\k\g\z\4\g\8\2\g\v\l\6\x\6\h\b\q\9\p\k\i\y\m\8\v\0\0\3\8\u\1\n\j\d\r\b\f\d\c\0\1\r\y\r\f\g\f\v\b\q\v\x\r\a\0\3\q\7\n\b\0\j\7\z\q\3\l\j\g\c\0\9\h\a\u\7\i\8\n\j\m\i\k\d\c\e\6\1\w\j\w\k\e\7\e\h\m\3\4\n\u\1\y\a\u\m\g\f\s\b\t\0\j\3\0\h\4\s\u\m\b\5\m\e\f\p\j\f\5\c\9\2\f\j\k\u\d\8\t\o\x\1\8\z\g\u\t\b\b\z\r\1\b\n\o\r\2\t\q\f\a\r\h\5\z\7\c\1\t\e\r\z\c\k\h\7\s\m\x\q\5\6\i\w\y\j\2\5\p\k\c\9\p\k\5\j\3\g\8\1\n\1\o\a\s\0\3\k\n\x\d\q\j\n\o\0\x\s\z\3\u\f\v\o\c\a\k\1\j\m\q\1\q\l\x\3\d\a\t\w\s\r\v\f\i\n\s\m\a\p\u\o\2\8\u\7\y\e\y\w\5\4\7\a\b\9\f\8\p\j\i\9\v\v\l\q\z\p\4\x\a\y\0\2\4\l\r\k\y\i\7\i\b\n\a\5\c\n\g\b\j\b\p\2\2\3\9\u\p\f\k\j\s\a\o\7\s\d\z\0\a\c\5\9\l\y\u\e\z\u\n\m\4\6\h\b\e\2\5\w\u\n\m\3\1\d\j\5\e\g\w\6\o\p\z\x\u\e\q\3\y\5\z\h\n\8\a\e\k\t\3\5\r\i\2\4\m\c\j\6\w\p\4\6\9\7\8\x\q\i\j\t\8\y\j\2\s\e\l\x\4\l\u\2\3\t\t\z\j\q\2\8\y\l\8\b\n\a\g\0\f\y\9\x\a\x\3\d\2\n\x\q\g\j\c\z\e\v\p\q\c\k\p\f
\n\j\9\a\t\o\1\0\f\h\0\u\o\c\f\r\v\9\5\x\3\v\6\t\e\d\s\3\n\g\d\1\z\u\p\l\j\7\a\b\v\9\4\d\x\y\j\j\h\v\y\5\v\z\1\c\8\q\9\1\v\5\s\d\g\f\e\u\b\s\h\0\n\2\u\6\6\e\v\5\y\4\2\f\v\6\r\m\v\z\h\0\7\a\0\v\a\z\s\f\n\m\9\s\f\d\v\4\q\o\a\j\7\a\6\g\e\i\7\6\e\n\d\n\b\b\l\g\4\g\8\m\s\6\w\o\0\p\y\n\h\i\j\u\1\b\s\e\8\c\k\8\j\u\a\s\e\z\1\i\3\o\x\p\l\9\z\u\0\b\n\n\2\g\j\e\5\y\h\g\j\i\p\5\m\i\v\u\7\k\c\6\x\v\w\m\l\v\c\9\5\f\4\7\o\g\3\4\g\2\4\5\o\h\t\6\m\d\0\p\s\1\t\v\c\x\g\r\i\j\s\c\g\e\1\2\0\j\8\n\z\6\4\m\4\n\e\4\j\l\6\m\a\o\e\w\o\5\5\w\6\j\q\9\p\x\4\d\9\o\7\j\z\a\o\y\9\a\a\p\8\b\x\n\l\g\i\r\a\k\i\3\q\b\q\2\7\e\a\g\o\g\i\v\x\u\y\o\x\7\z\n\a\8\o\b\6\1\6\y\i\1\7\y\g\0\i\j\a\w\p\5\1\g\t\y\u\8\t\j\a\3\j\4\u\e\c\3\v\f\m\r\v\2\a\w\h\r\g\e\i\7\0\z\v\g\9\j\m\j\s\p\v\h\r\j\8\1\7\3\u\2\u\t\s\w\m\m\y\2\f\p\9\j\s\o\8\p\k\i\b\d\5\w\d\8\j\i\9\3\i\4\2\t\3\r\1\4\l\y\m\n\8\r\a\j\1\u\z\t\2\2\o\p\p\i\m\3\m\r\r\o\x\0\r\3\9\7\g\v\p\p\y\r\0\w\n\9\5\c\d\4\e\c\1\6\u\f\5\c\a\z\x\h\z\w\0\c\z\n\s\k\z\q\9\b\5\j\2\s\i\j\9\d\c\y\c\3\o\n\o\9\n\7\z\b\u\y\3\g\i\g\2\i\9\g\o\l\5\w\l\j\p\n\5\0\y\y\i\d\d\x\a\z\h\f\h\t\0\7\t\l\t\e\x\f\9\y\n\b\s\a\z\i\1\w\z\6\s\v\v\g\7\o\m\q\u\p\l\r\e\e\e\q\c\o\v\8\8\h\p\k\7\4\n\e\c\n\t\u\k\j\2\q\l\s\h\h\5\m\y\c\u\s\y\r\g\y\q\e\2\5\b\s\7\5\c\6\0\1\a\v\7\l\w\1\y\z\t\s\4\r\f\3\n\q\o\t\y\v\1\l\k\0\5\o\r\v\d\v\1\u\l\9\u\o\8\6\u\h\o\f\5\q\2\k\q\g\g\1\x\n\l\p\5\a\w\1\c\n\9\z\u\t\o\1\d\e\q\a\w\k\d\m\1\h\l\q\8\l\6\l\e\j\a\5\p\m\u\y\2\r\i\j\3\a\v\i\d\d\q\c\0\f\7\f\5\s\e\3\f\p\p\p\4\c\h\i\v\v\f\v\5\3\n\l\z\b\y\h\l\5\7\q\1\h\k\2\9\m\v\4\h\g\w\t\i\1\d\w\r\q\k\9\n\j\u\w\8\d\a\y\q\f\i\s\a\6\r\3\t\c\2\e\f\j\6\s\u\k\x\0\i\g\g\v\e\v\x\7\v\j\b\n\q\p\7\w\7\5\x\z\d\3\o\n\2\a\7\6\k\m\l\t\j\j\m\p\l\a\d\j\p\k\e\4\q\r\z\j\v\w\y\e\f\m\n\f\9\9\n\j\r\c\y\n\b\i\4\c\f\g\t\r\f\7\5\w\k\t\f\1\5\6\m\h\y\z\4\o\v\b\b\k\6\5\c\d\8\g\s\n\q\i\9\e\e\0\o\3\z\x\9\o\k\m\t\o\w\b\o\q\x\9\4\b\l\v\e\a\h\5\d\p\s\n\n\t\k\r\f\x\5\t\0\s\t\l\h\m\r\l\6\y\4\k\2\v\n\v\6\c\j\1\3\q\5\q\6\x\b\j\4\w\o\3\3\s\m\4\2\4\7\z\g\b\e\q\j\l\8\z\v\k\z\v\q\6\j\k\n\a\0\s\p\p\u\9\8\4\0\4\k\d\p\x\k\g\8\7\h\j\d\7\1\f\e\h\t\x\0\i\1\q\o\3\8\h\m\m\a\t\a\e\c\2\s\k\v\2\t\w\z\k\6\z\g\j\l\1\5\j\d\1\e\n\g\t\e\c\v\y\h\e\k\6\6\3\d\g\2\y\b\9\9\h\j\1\b\o\1\n\4\o\y\w\m\y\i\3\v\z\8\a\1\d\e\6\v\u\i\x\g\3\z\x\b\u\v\y\1\6\2\k\t\k\m\3\a\9\a\2\q\k\z\y\x\e\e\x\9\2\z\g\7\d\w\a\f\k\1\y\5\x\5\j\8\k\h\8\e\t\l\i\z\g\3\e\r\q\7\n\f\w\r\h\f\g\z\c\t\m\k\n\r\o\e\j\e\h\s\a\h\8\d\o\h\c\l\r\v\l\2\o\1\m\d\y\m\3\2\c\4\n\u\e\4\s\i\n\u\9\o\q\e\d\h\3\g\1\r\v\z\x\w\r\l\d\9\n\c\i\k\z\n\8\z\a\u\r\m\7\6\e\v\t\i\x\k\5\o\p\s\w\e\b\1\x\k\w\2\8\c\q\w\p\4\1\j\n\d\x\s\x\j\7\8 ]] 00:05:59.638 00:05:59.638 real 0m1.462s 00:05:59.638 user 0m1.010s 00:05:59.638 sys 0m0.627s 00:05:59.638 03:08:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.638 03:08:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:59.638 ************************************ 00:05:59.638 END TEST dd_rw_offset 00:05:59.638 ************************************ 00:05:59.638 03:08:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:05:59.638 03:08:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:05:59.638 03:08:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:59.638 03:08:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:59.638 03:08:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:05:59.638 03:08:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
00:05:59.638 03:08:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:05:59.638 03:08:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:59.638 03:08:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:05:59.638 03:08:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:59.638 03:08:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:59.638 [2024-10-09 03:08:42.828613] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:05:59.638 [2024-10-09 03:08:42.828726] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60090 ] 00:05:59.638 { 00:05:59.638 "subsystems": [ 00:05:59.638 { 00:05:59.638 "subsystem": "bdev", 00:05:59.638 "config": [ 00:05:59.638 { 00:05:59.638 "params": { 00:05:59.638 "trtype": "pcie", 00:05:59.638 "traddr": "0000:00:10.0", 00:05:59.638 "name": "Nvme0" 00:05:59.638 }, 00:05:59.638 "method": "bdev_nvme_attach_controller" 00:05:59.638 }, 00:05:59.638 { 00:05:59.638 "method": "bdev_wait_for_examine" 00:05:59.638 } 00:05:59.638 ] 00:05:59.638 } 00:05:59.638 ] 00:05:59.638 } 00:05:59.896 [2024-10-09 03:08:42.968661] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.897 [2024-10-09 03:08:43.063462] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.897 [2024-10-09 03:08:43.139653] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:00.155  [2024-10-09T03:08:43.717Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:00.414 00:06:00.414 03:08:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:00.414 00:06:00.414 real 0m19.771s 00:06:00.414 user 0m14.174s 00:06:00.414 sys 0m7.402s 00:06:00.414 03:08:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.414 03:08:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:00.414 ************************************ 00:06:00.414 END TEST spdk_dd_basic_rw 00:06:00.414 ************************************ 00:06:00.414 03:08:43 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:00.414 03:08:43 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.414 03:08:43 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.414 03:08:43 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:00.414 ************************************ 00:06:00.414 START TEST spdk_dd_posix 00:06:00.414 ************************************ 00:06:00.414 03:08:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:00.414 * Looking for test storage... 
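The clear_nvme step above drives spdk_dd against the raw NVMe bdev by handing it a bdev JSON configuration on an anonymous descriptor (--json /dev/fd/62) before zero-filling one 1 MiB block. A minimal stand-alone sketch of that invocation pattern, assuming spdk_dd is on PATH and the controller at 0000:00:10.0 is bound to SPDK; cfg.json is an illustrative file name, not something the test creates:

  # Sketch: zero-fill one 1 MiB block of the Nvme0n1 bdev via spdk_dd.
  # The test streams the same JSON via /dev/fd/62 instead of a temp file.
  printf '%s' '{"subsystems":[{"subsystem":"bdev","config":[
    {"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},
     "method":"bdev_nvme_attach_controller"},
    {"method":"bdev_wait_for_examine"}]}]}' > cfg.json
  spdk_dd --if=/dev/zero --bs=1048576 --count=1 --ob=Nvme0n1 --json cfg.json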
00:06:00.414 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:00.414 03:08:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:00.414 03:08:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # lcov --version 00:06:00.414 03:08:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:00.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.673 --rc genhtml_branch_coverage=1 00:06:00.673 --rc genhtml_function_coverage=1 00:06:00.673 --rc genhtml_legend=1 00:06:00.673 --rc geninfo_all_blocks=1 00:06:00.673 --rc geninfo_unexecuted_blocks=1 00:06:00.673 00:06:00.673 ' 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:00.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.673 --rc genhtml_branch_coverage=1 00:06:00.673 --rc genhtml_function_coverage=1 00:06:00.673 --rc genhtml_legend=1 00:06:00.673 --rc geninfo_all_blocks=1 00:06:00.673 --rc geninfo_unexecuted_blocks=1 00:06:00.673 00:06:00.673 ' 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:00.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.673 --rc genhtml_branch_coverage=1 00:06:00.673 --rc genhtml_function_coverage=1 00:06:00.673 --rc genhtml_legend=1 00:06:00.673 --rc geninfo_all_blocks=1 00:06:00.673 --rc geninfo_unexecuted_blocks=1 00:06:00.673 00:06:00.673 ' 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:00.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.673 --rc genhtml_branch_coverage=1 00:06:00.673 --rc genhtml_function_coverage=1 00:06:00.673 --rc genhtml_legend=1 00:06:00.673 --rc geninfo_all_blocks=1 00:06:00.673 --rc geninfo_unexecuted_blocks=1 00:06:00.673 00:06:00.673 ' 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:00.673 03:08:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:00.674 03:08:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:00.674 03:08:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:00.674 * First test run, liburing in use 00:06:00.674 03:08:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:00.674 03:08:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.674 03:08:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:06:00.674 03:08:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:00.674 ************************************ 00:06:00.674 START TEST dd_flag_append 00:06:00.674 ************************************ 00:06:00.674 03:08:43 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1125 -- # append 00:06:00.674 03:08:43 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:00.674 03:08:43 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:00.674 03:08:43 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:00.674 03:08:43 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:00.674 03:08:43 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:00.674 03:08:43 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=q5ot99vnia42x2sc99ilc7vksxse8i4a 00:06:00.674 03:08:43 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:00.674 03:08:43 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:00.674 03:08:43 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:00.674 03:08:43 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=g7piq8dbnq1fuxjxxg27isdlfqcshf6h 00:06:00.674 03:08:43 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s q5ot99vnia42x2sc99ilc7vksxse8i4a 00:06:00.674 03:08:43 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s g7piq8dbnq1fuxjxxg27isdlfqcshf6h 00:06:00.674 03:08:43 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:00.674 [2024-10-09 03:08:43.870596] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
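The dd_flag_append case above writes two generated 32-byte strings to dd.dump0 and dd.dump1 and then copies dump0 onto dump1 with --oflag=append, expecting the destination to end up as dump1 followed by dump0. A rough coreutils equivalent of that check, with short placeholder strings instead of the generated ones:

  # Sketch of the append check: dest must equal old dest + src after an append copy.
  printf %s "AAAA" > dd.dump0          # stands in for the 32-byte random dump0
  printf %s "BBBB" > dd.dump1          # stands in for the 32-byte random dump1
  dd if=dd.dump0 of=dd.dump1 oflag=append conv=notrunc status=none
  [[ "$(cat dd.dump1)" == "BBBBAAAA" ]] && echo "append OK"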
00:06:00.674 [2024-10-09 03:08:43.870694] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60162 ] 00:06:00.932 [2024-10-09 03:08:44.012926] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.932 [2024-10-09 03:08:44.125579] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.932 [2024-10-09 03:08:44.195354] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.192  [2024-10-09T03:08:44.754Z] Copying: 32/32 [B] (average 31 kBps) 00:06:01.452 00:06:01.452 03:08:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ g7piq8dbnq1fuxjxxg27isdlfqcshf6hq5ot99vnia42x2sc99ilc7vksxse8i4a == \g\7\p\i\q\8\d\b\n\q\1\f\u\x\j\x\x\g\2\7\i\s\d\l\f\q\c\s\h\f\6\h\q\5\o\t\9\9\v\n\i\a\4\2\x\2\s\c\9\9\i\l\c\7\v\k\s\x\s\e\8\i\4\a ]] 00:06:01.452 00:06:01.452 real 0m0.697s 00:06:01.452 user 0m0.405s 00:06:01.452 sys 0m0.344s 00:06:01.452 03:08:44 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.452 03:08:44 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:01.452 ************************************ 00:06:01.452 END TEST dd_flag_append 00:06:01.452 ************************************ 00:06:01.452 03:08:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:01.452 03:08:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.452 03:08:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.452 03:08:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:01.452 ************************************ 00:06:01.452 START TEST dd_flag_directory 00:06:01.452 ************************************ 00:06:01.452 03:08:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1125 -- # directory 00:06:01.452 03:08:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:01.452 03:08:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:06:01.452 03:08:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:01.452 03:08:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:01.452 03:08:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.452 03:08:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:01.452 03:08:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.452 03:08:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:01.452 03:08:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.452 03:08:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:01.452 03:08:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:01.452 03:08:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:01.452 [2024-10-09 03:08:44.606483] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:06:01.452 [2024-10-09 03:08:44.606560] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60186 ] 00:06:01.452 [2024-10-09 03:08:44.737648] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.711 [2024-10-09 03:08:44.870144] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.711 [2024-10-09 03:08:44.941838] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.711 [2024-10-09 03:08:44.985435] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:01.711 [2024-10-09 03:08:44.985503] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:01.711 [2024-10-09 03:08:44.985516] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:01.970 [2024-10-09 03:08:45.149194] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:01.970 03:08:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:06:01.970 03:08:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:01.970 03:08:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:06:01.970 03:08:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:06:01.970 03:08:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:06:01.970 03:08:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:01.971 03:08:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:01.971 03:08:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:06:01.971 03:08:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:01.971 03:08:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:01.971 03:08:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.971 03:08:45 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:01.971 03:08:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.971 03:08:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:01.971 03:08:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.971 03:08:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:01.971 03:08:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:01.971 03:08:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:02.230 [2024-10-09 03:08:45.329247] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:06:02.230 [2024-10-09 03:08:45.329355] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60201 ] 00:06:02.230 [2024-10-09 03:08:45.465513] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.489 [2024-10-09 03:08:45.571644] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.489 [2024-10-09 03:08:45.643262] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:02.489 [2024-10-09 03:08:45.689755] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:02.489 [2024-10-09 03:08:45.689815] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:02.489 [2024-10-09 03:08:45.689830] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:02.751 [2024-10-09 03:08:45.856843] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:02.751 03:08:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:06:02.751 03:08:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:02.751 03:08:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:06:02.751 03:08:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:06:02.751 03:08:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:06:02.751 03:08:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:02.751 00:06:02.751 real 0m1.408s 00:06:02.751 user 0m0.821s 00:06:02.751 sys 0m0.376s 00:06:02.751 03:08:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.751 03:08:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:06:02.751 ************************************ 00:06:02.751 END TEST dd_flag_directory 00:06:02.751 ************************************ 00:06:02.751 03:08:46 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:02.751 03:08:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.751 03:08:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.751 03:08:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:02.751 ************************************ 00:06:02.751 START TEST dd_flag_nofollow 00:06:02.751 ************************************ 00:06:02.751 03:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1125 -- # nofollow 00:06:02.751 03:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:02.751 03:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:02.751 03:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:02.751 03:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:02.751 03:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:02.751 03:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:06:02.751 03:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:02.751 03:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.751 03:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.751 03:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.751 03:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.751 03:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.751 03:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.751 03:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.751 03:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:02.751 03:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:03.013 [2024-10-09 03:08:46.081957] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
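dd_flag_nofollow links dd.dump0.link and dd.dump1.link to the dump files and expects the copies issued with --iflag=nofollow / --oflag=nofollow through a link to fail with "Too many levels of symbolic links", while the final copy through the link without the flag still succeeds. A small illustration of the same O_NOFOLLOW behaviour with coreutils dd (file names reused from the log, payload is a placeholder):

  # Sketch: opening a symlink with O_NOFOLLOW must fail with ELOOP.
  printf %s "payload" > dd.dump0
  ln -fs dd.dump0 dd.dump0.link
  if ! dd if=dd.dump0.link iflag=nofollow of=/dev/null status=none 2>/dev/null; then
      echo "nofollow rejected the symlink as expected"
  fi
  dd if=dd.dump0.link of=/dev/null status=none   # without nofollow the copy succeeds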
00:06:03.013 [2024-10-09 03:08:46.082075] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60235 ] 00:06:03.013 [2024-10-09 03:08:46.219532] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.273 [2024-10-09 03:08:46.319438] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.273 [2024-10-09 03:08:46.393679] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:03.273 [2024-10-09 03:08:46.439037] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:03.273 [2024-10-09 03:08:46.439113] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:03.273 [2024-10-09 03:08:46.439128] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:03.533 [2024-10-09 03:08:46.608591] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:03.533 03:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:06:03.533 03:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:03.533 03:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:06:03.533 03:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:06:03.533 03:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:06:03.533 03:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:03.533 03:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:03.533 03:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:06:03.533 03:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:03.533 03:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:03.533 03:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.533 03:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:03.533 03:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.533 03:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:03.533 03:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.533 03:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:03.533 03:08:46 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:03.533 03:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:03.533 [2024-10-09 03:08:46.790154] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:06:03.533 [2024-10-09 03:08:46.790251] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60241 ] 00:06:03.792 [2024-10-09 03:08:46.925211] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.792 [2024-10-09 03:08:47.033075] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.052 [2024-10-09 03:08:47.104121] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:04.052 [2024-10-09 03:08:47.147000] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:04.052 [2024-10-09 03:08:47.147066] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:04.052 [2024-10-09 03:08:47.147082] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:04.052 [2024-10-09 03:08:47.309266] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:04.310 03:08:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:06:04.310 03:08:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:04.310 03:08:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:06:04.310 03:08:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:06:04.310 03:08:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:06:04.310 03:08:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:04.310 03:08:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:04.310 03:08:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:04.310 03:08:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:04.310 03:08:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:04.310 [2024-10-09 03:08:47.481703] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:06:04.310 [2024-10-09 03:08:47.481825] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60254 ] 00:06:04.609 [2024-10-09 03:08:47.615698] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.609 [2024-10-09 03:08:47.728195] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.609 [2024-10-09 03:08:47.801552] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:04.609  [2024-10-09T03:08:48.171Z] Copying: 512/512 [B] (average 500 kBps) 00:06:04.868 00:06:04.869 03:08:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 0zb8og4dkx48kp83zfx52aern7jjcb770i5ea3w1tkxrz553to1bl5ekpjlp7yrm6wm7igsptu4yd73hiynxhb8kc9yy3h206pwxu9hakn136k8dv4csncs5f1q3ottry1igvikvx7aev1h1i7j4a9hj4jqq740n04pz9ah6fd5l2c6wdmh8dmbwbk9ydvsj9xiwlfwku2kgvqtnsp48s7jw0rg2e02aoqkogqj76tk1a0eh9vu7i30x4oyc6enq2m1ox3eywtlh88il29914d88cs46d2viysegjpawwuk5tzdi1wp52slu7la21bjo8kqi9bkx9bhbyhmxa8qhpcso5vk00g3i05y9vyw0mdj6u7ky4wv6eqfcw6xvfp38s2tb2ix7mqht6t3uk14ustvzs4mii4978on5cjq0xlz8j4f88q9kfw3alg1bn9mbx5uinmwuhvvu63wvtibw9i9y2i1351fmgtyejlr920jez5opzmvhi1pgfsxn33uz == \0\z\b\8\o\g\4\d\k\x\4\8\k\p\8\3\z\f\x\5\2\a\e\r\n\7\j\j\c\b\7\7\0\i\5\e\a\3\w\1\t\k\x\r\z\5\5\3\t\o\1\b\l\5\e\k\p\j\l\p\7\y\r\m\6\w\m\7\i\g\s\p\t\u\4\y\d\7\3\h\i\y\n\x\h\b\8\k\c\9\y\y\3\h\2\0\6\p\w\x\u\9\h\a\k\n\1\3\6\k\8\d\v\4\c\s\n\c\s\5\f\1\q\3\o\t\t\r\y\1\i\g\v\i\k\v\x\7\a\e\v\1\h\1\i\7\j\4\a\9\h\j\4\j\q\q\7\4\0\n\0\4\p\z\9\a\h\6\f\d\5\l\2\c\6\w\d\m\h\8\d\m\b\w\b\k\9\y\d\v\s\j\9\x\i\w\l\f\w\k\u\2\k\g\v\q\t\n\s\p\4\8\s\7\j\w\0\r\g\2\e\0\2\a\o\q\k\o\g\q\j\7\6\t\k\1\a\0\e\h\9\v\u\7\i\3\0\x\4\o\y\c\6\e\n\q\2\m\1\o\x\3\e\y\w\t\l\h\8\8\i\l\2\9\9\1\4\d\8\8\c\s\4\6\d\2\v\i\y\s\e\g\j\p\a\w\w\u\k\5\t\z\d\i\1\w\p\5\2\s\l\u\7\l\a\2\1\b\j\o\8\k\q\i\9\b\k\x\9\b\h\b\y\h\m\x\a\8\q\h\p\c\s\o\5\v\k\0\0\g\3\i\0\5\y\9\v\y\w\0\m\d\j\6\u\7\k\y\4\w\v\6\e\q\f\c\w\6\x\v\f\p\3\8\s\2\t\b\2\i\x\7\m\q\h\t\6\t\3\u\k\1\4\u\s\t\v\z\s\4\m\i\i\4\9\7\8\o\n\5\c\j\q\0\x\l\z\8\j\4\f\8\8\q\9\k\f\w\3\a\l\g\1\b\n\9\m\b\x\5\u\i\n\m\w\u\h\v\v\u\6\3\w\v\t\i\b\w\9\i\9\y\2\i\1\3\5\1\f\m\g\t\y\e\j\l\r\9\2\0\j\e\z\5\o\p\z\m\v\h\i\1\p\g\f\s\x\n\3\3\u\z ]] 00:06:04.869 ************************************ 00:06:04.869 END TEST dd_flag_nofollow 00:06:04.869 ************************************ 00:06:04.869 00:06:04.869 real 0m2.105s 00:06:04.869 user 0m1.211s 00:06:04.869 sys 0m0.758s 00:06:04.869 03:08:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.869 03:08:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:04.869 03:08:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:04.869 03:08:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.869 03:08:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.869 03:08:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:04.869 ************************************ 00:06:04.869 START TEST dd_flag_noatime 00:06:04.869 ************************************ 00:06:04.869 03:08:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1125 -- # noatime 00:06:04.869 03:08:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:06:04.869 03:08:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:04.869 03:08:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:04.869 03:08:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:04.869 03:08:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:05.127 03:08:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:05.127 03:08:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1728443327 00:06:05.127 03:08:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:05.127 03:08:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1728443328 00:06:05.127 03:08:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:06.063 03:08:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:06.063 [2024-10-09 03:08:49.247039] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:06:06.063 [2024-10-09 03:08:49.247165] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60302 ] 00:06:06.322 [2024-10-09 03:08:49.386126] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.322 [2024-10-09 03:08:49.505160] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.322 [2024-10-09 03:08:49.575982] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:06.323  [2024-10-09T03:08:49.884Z] Copying: 512/512 [B] (average 500 kBps) 00:06:06.581 00:06:06.581 03:08:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:06.581 03:08:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1728443327 )) 00:06:06.581 03:08:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:06.581 03:08:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1728443328 )) 00:06:06.581 03:08:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:06.840 [2024-10-09 03:08:49.917613] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
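The noatime case records the access time of the dump files with stat --printf=%X, copies with --iflag=noatime and checks that the source atime has not moved, then repeats the copy without the flag and expects the atime to advance. A sketch of the same check with coreutils, assuming the filesystem is mounted so that atime updates are possible at all (a strict noatime mount would defeat the second assertion):

  # Sketch: --iflag=noatime should leave the source atime untouched.
  head -c 512 /dev/urandom > dd.dump0
  before=$(stat --printf=%X dd.dump0)
  sleep 1
  dd if=dd.dump0 of=dd.dump1 iflag=noatime status=none
  (( $(stat --printf=%X dd.dump0) == before )) && echo "atime preserved"
  sleep 1
  dd if=dd.dump0 of=dd.dump1 status=none          # plain read should update atime
  (( $(stat --printf=%X dd.dump0) > before )) && echo "atime advanced"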
00:06:06.840 [2024-10-09 03:08:49.917689] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60316 ] 00:06:06.840 [2024-10-09 03:08:50.047396] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.840 [2024-10-09 03:08:50.141768] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.099 [2024-10-09 03:08:50.213373] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:07.099  [2024-10-09T03:08:50.661Z] Copying: 512/512 [B] (average 500 kBps) 00:06:07.358 00:06:07.358 03:08:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:07.358 03:08:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1728443330 )) 00:06:07.358 00:06:07.358 real 0m2.396s 00:06:07.358 user 0m0.796s 00:06:07.358 sys 0m0.716s 00:06:07.358 03:08:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.358 ************************************ 00:06:07.358 END TEST dd_flag_noatime 00:06:07.358 03:08:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:07.358 ************************************ 00:06:07.358 03:08:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:07.358 03:08:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.358 03:08:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.358 03:08:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:07.358 ************************************ 00:06:07.358 START TEST dd_flags_misc 00:06:07.358 ************************************ 00:06:07.358 03:08:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1125 -- # io 00:06:07.358 03:08:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:07.358 03:08:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:07.358 03:08:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:07.358 03:08:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:07.358 03:08:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:07.358 03:08:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:07.358 03:08:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:07.358 03:08:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:07.358 03:08:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:07.617 [2024-10-09 03:08:50.667189] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
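dd_flags_misc builds a 512-byte payload and loops it through every input/output flag pairing drawn from direct and nonblock on the read side and direct, nonblock, sync and dsync on the write side, verifying the destination contents after each copy. A compact version of that loop, with cmp standing in for the inline string match used by the test; direct I/O additionally assumes the filesystem and block size permit O_DIRECT:

  # Sketch of the flag matrix: each input flag is paired with each output flag
  # and the copy is verified byte-for-byte every time.
  head -c 512 /dev/urandom > dd.dump0
  flags_ro=(direct nonblock)
  flags_rw=("${flags_ro[@]}" sync dsync)
  for flag_ro in "${flags_ro[@]}"; do
      for flag_rw in "${flags_rw[@]}"; do
          dd if=dd.dump0 of=dd.dump1 iflag="$flag_ro" oflag="$flag_rw" status=none
          cmp -s dd.dump0 dd.dump1 && echo "ok: iflag=$flag_ro oflag=$flag_rw"
      done
  done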
00:06:07.617 [2024-10-09 03:08:50.667265] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60344 ] 00:06:07.617 [2024-10-09 03:08:50.793950] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.617 [2024-10-09 03:08:50.884639] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.877 [2024-10-09 03:08:50.952969] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:07.877  [2024-10-09T03:08:51.439Z] Copying: 512/512 [B] (average 500 kBps) 00:06:08.136 00:06:08.136 03:08:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ r7xzzk2mv2joecvr2go98dk8x0r96fdn9kvhlnvxte5gra8004q85mwxqo5dv1yivv5upg96qbe519co1rx1la8ub2pg1jejsm7cqz0bw32uw35uzaubtsqctn5m118dyw6agopep2j8ep1lvz53v8q7rpyo3te9143b2q52e770vp194o1ffm544ts8uw9usz6ij0ql47y4zxgyimco36yhjakxf4rq1h34dbhfd6bm9lqf7x68chmo8inomzr5snjjkot5o3qxvpe4nzlahlbsm49sevmjvsaj9ng3qcdxthsvxt1pcyjvu7f8b2x557geglzn8sqhrnmesn3cn7rc27yzgrk42t0dl86fqp9z1iuml85cgd6hr38ih8ghra35bf3sb42bz501j3drj8fk0gmox25ykkizm55q0dz45rz2lzfsfz5we1oxi5i8b05485jd3rzd9ptd7htg3wzdeilygyw9g5ou5yw4brefka5fqf9ubok1rzzsxqnu == \r\7\x\z\z\k\2\m\v\2\j\o\e\c\v\r\2\g\o\9\8\d\k\8\x\0\r\9\6\f\d\n\9\k\v\h\l\n\v\x\t\e\5\g\r\a\8\0\0\4\q\8\5\m\w\x\q\o\5\d\v\1\y\i\v\v\5\u\p\g\9\6\q\b\e\5\1\9\c\o\1\r\x\1\l\a\8\u\b\2\p\g\1\j\e\j\s\m\7\c\q\z\0\b\w\3\2\u\w\3\5\u\z\a\u\b\t\s\q\c\t\n\5\m\1\1\8\d\y\w\6\a\g\o\p\e\p\2\j\8\e\p\1\l\v\z\5\3\v\8\q\7\r\p\y\o\3\t\e\9\1\4\3\b\2\q\5\2\e\7\7\0\v\p\1\9\4\o\1\f\f\m\5\4\4\t\s\8\u\w\9\u\s\z\6\i\j\0\q\l\4\7\y\4\z\x\g\y\i\m\c\o\3\6\y\h\j\a\k\x\f\4\r\q\1\h\3\4\d\b\h\f\d\6\b\m\9\l\q\f\7\x\6\8\c\h\m\o\8\i\n\o\m\z\r\5\s\n\j\j\k\o\t\5\o\3\q\x\v\p\e\4\n\z\l\a\h\l\b\s\m\4\9\s\e\v\m\j\v\s\a\j\9\n\g\3\q\c\d\x\t\h\s\v\x\t\1\p\c\y\j\v\u\7\f\8\b\2\x\5\5\7\g\e\g\l\z\n\8\s\q\h\r\n\m\e\s\n\3\c\n\7\r\c\2\7\y\z\g\r\k\4\2\t\0\d\l\8\6\f\q\p\9\z\1\i\u\m\l\8\5\c\g\d\6\h\r\3\8\i\h\8\g\h\r\a\3\5\b\f\3\s\b\4\2\b\z\5\0\1\j\3\d\r\j\8\f\k\0\g\m\o\x\2\5\y\k\k\i\z\m\5\5\q\0\d\z\4\5\r\z\2\l\z\f\s\f\z\5\w\e\1\o\x\i\5\i\8\b\0\5\4\8\5\j\d\3\r\z\d\9\p\t\d\7\h\t\g\3\w\z\d\e\i\l\y\g\y\w\9\g\5\o\u\5\y\w\4\b\r\e\f\k\a\5\f\q\f\9\u\b\o\k\1\r\z\z\s\x\q\n\u ]] 00:06:08.136 03:08:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:08.136 03:08:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:08.136 [2024-10-09 03:08:51.299942] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:06:08.136 [2024-10-09 03:08:51.300026] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60359 ] 00:06:08.136 [2024-10-09 03:08:51.431559] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.396 [2024-10-09 03:08:51.526256] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.396 [2024-10-09 03:08:51.594487] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:08.396  [2024-10-09T03:08:51.958Z] Copying: 512/512 [B] (average 500 kBps) 00:06:08.655 00:06:08.655 03:08:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ r7xzzk2mv2joecvr2go98dk8x0r96fdn9kvhlnvxte5gra8004q85mwxqo5dv1yivv5upg96qbe519co1rx1la8ub2pg1jejsm7cqz0bw32uw35uzaubtsqctn5m118dyw6agopep2j8ep1lvz53v8q7rpyo3te9143b2q52e770vp194o1ffm544ts8uw9usz6ij0ql47y4zxgyimco36yhjakxf4rq1h34dbhfd6bm9lqf7x68chmo8inomzr5snjjkot5o3qxvpe4nzlahlbsm49sevmjvsaj9ng3qcdxthsvxt1pcyjvu7f8b2x557geglzn8sqhrnmesn3cn7rc27yzgrk42t0dl86fqp9z1iuml85cgd6hr38ih8ghra35bf3sb42bz501j3drj8fk0gmox25ykkizm55q0dz45rz2lzfsfz5we1oxi5i8b05485jd3rzd9ptd7htg3wzdeilygyw9g5ou5yw4brefka5fqf9ubok1rzzsxqnu == \r\7\x\z\z\k\2\m\v\2\j\o\e\c\v\r\2\g\o\9\8\d\k\8\x\0\r\9\6\f\d\n\9\k\v\h\l\n\v\x\t\e\5\g\r\a\8\0\0\4\q\8\5\m\w\x\q\o\5\d\v\1\y\i\v\v\5\u\p\g\9\6\q\b\e\5\1\9\c\o\1\r\x\1\l\a\8\u\b\2\p\g\1\j\e\j\s\m\7\c\q\z\0\b\w\3\2\u\w\3\5\u\z\a\u\b\t\s\q\c\t\n\5\m\1\1\8\d\y\w\6\a\g\o\p\e\p\2\j\8\e\p\1\l\v\z\5\3\v\8\q\7\r\p\y\o\3\t\e\9\1\4\3\b\2\q\5\2\e\7\7\0\v\p\1\9\4\o\1\f\f\m\5\4\4\t\s\8\u\w\9\u\s\z\6\i\j\0\q\l\4\7\y\4\z\x\g\y\i\m\c\o\3\6\y\h\j\a\k\x\f\4\r\q\1\h\3\4\d\b\h\f\d\6\b\m\9\l\q\f\7\x\6\8\c\h\m\o\8\i\n\o\m\z\r\5\s\n\j\j\k\o\t\5\o\3\q\x\v\p\e\4\n\z\l\a\h\l\b\s\m\4\9\s\e\v\m\j\v\s\a\j\9\n\g\3\q\c\d\x\t\h\s\v\x\t\1\p\c\y\j\v\u\7\f\8\b\2\x\5\5\7\g\e\g\l\z\n\8\s\q\h\r\n\m\e\s\n\3\c\n\7\r\c\2\7\y\z\g\r\k\4\2\t\0\d\l\8\6\f\q\p\9\z\1\i\u\m\l\8\5\c\g\d\6\h\r\3\8\i\h\8\g\h\r\a\3\5\b\f\3\s\b\4\2\b\z\5\0\1\j\3\d\r\j\8\f\k\0\g\m\o\x\2\5\y\k\k\i\z\m\5\5\q\0\d\z\4\5\r\z\2\l\z\f\s\f\z\5\w\e\1\o\x\i\5\i\8\b\0\5\4\8\5\j\d\3\r\z\d\9\p\t\d\7\h\t\g\3\w\z\d\e\i\l\y\g\y\w\9\g\5\o\u\5\y\w\4\b\r\e\f\k\a\5\f\q\f\9\u\b\o\k\1\r\z\z\s\x\q\n\u ]] 00:06:08.655 03:08:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:08.655 03:08:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:08.655 [2024-10-09 03:08:51.942262] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:06:08.655 [2024-10-09 03:08:51.942366] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60369 ] 00:06:08.914 [2024-10-09 03:08:52.078938] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.914 [2024-10-09 03:08:52.167195] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.173 [2024-10-09 03:08:52.235296] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:09.173  [2024-10-09T03:08:52.736Z] Copying: 512/512 [B] (average 100 kBps) 00:06:09.433 00:06:09.433 03:08:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ r7xzzk2mv2joecvr2go98dk8x0r96fdn9kvhlnvxte5gra8004q85mwxqo5dv1yivv5upg96qbe519co1rx1la8ub2pg1jejsm7cqz0bw32uw35uzaubtsqctn5m118dyw6agopep2j8ep1lvz53v8q7rpyo3te9143b2q52e770vp194o1ffm544ts8uw9usz6ij0ql47y4zxgyimco36yhjakxf4rq1h34dbhfd6bm9lqf7x68chmo8inomzr5snjjkot5o3qxvpe4nzlahlbsm49sevmjvsaj9ng3qcdxthsvxt1pcyjvu7f8b2x557geglzn8sqhrnmesn3cn7rc27yzgrk42t0dl86fqp9z1iuml85cgd6hr38ih8ghra35bf3sb42bz501j3drj8fk0gmox25ykkizm55q0dz45rz2lzfsfz5we1oxi5i8b05485jd3rzd9ptd7htg3wzdeilygyw9g5ou5yw4brefka5fqf9ubok1rzzsxqnu == \r\7\x\z\z\k\2\m\v\2\j\o\e\c\v\r\2\g\o\9\8\d\k\8\x\0\r\9\6\f\d\n\9\k\v\h\l\n\v\x\t\e\5\g\r\a\8\0\0\4\q\8\5\m\w\x\q\o\5\d\v\1\y\i\v\v\5\u\p\g\9\6\q\b\e\5\1\9\c\o\1\r\x\1\l\a\8\u\b\2\p\g\1\j\e\j\s\m\7\c\q\z\0\b\w\3\2\u\w\3\5\u\z\a\u\b\t\s\q\c\t\n\5\m\1\1\8\d\y\w\6\a\g\o\p\e\p\2\j\8\e\p\1\l\v\z\5\3\v\8\q\7\r\p\y\o\3\t\e\9\1\4\3\b\2\q\5\2\e\7\7\0\v\p\1\9\4\o\1\f\f\m\5\4\4\t\s\8\u\w\9\u\s\z\6\i\j\0\q\l\4\7\y\4\z\x\g\y\i\m\c\o\3\6\y\h\j\a\k\x\f\4\r\q\1\h\3\4\d\b\h\f\d\6\b\m\9\l\q\f\7\x\6\8\c\h\m\o\8\i\n\o\m\z\r\5\s\n\j\j\k\o\t\5\o\3\q\x\v\p\e\4\n\z\l\a\h\l\b\s\m\4\9\s\e\v\m\j\v\s\a\j\9\n\g\3\q\c\d\x\t\h\s\v\x\t\1\p\c\y\j\v\u\7\f\8\b\2\x\5\5\7\g\e\g\l\z\n\8\s\q\h\r\n\m\e\s\n\3\c\n\7\r\c\2\7\y\z\g\r\k\4\2\t\0\d\l\8\6\f\q\p\9\z\1\i\u\m\l\8\5\c\g\d\6\h\r\3\8\i\h\8\g\h\r\a\3\5\b\f\3\s\b\4\2\b\z\5\0\1\j\3\d\r\j\8\f\k\0\g\m\o\x\2\5\y\k\k\i\z\m\5\5\q\0\d\z\4\5\r\z\2\l\z\f\s\f\z\5\w\e\1\o\x\i\5\i\8\b\0\5\4\8\5\j\d\3\r\z\d\9\p\t\d\7\h\t\g\3\w\z\d\e\i\l\y\g\y\w\9\g\5\o\u\5\y\w\4\b\r\e\f\k\a\5\f\q\f\9\u\b\o\k\1\r\z\z\s\x\q\n\u ]] 00:06:09.433 03:08:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:09.433 03:08:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:09.433 [2024-10-09 03:08:52.632364] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:06:09.433 [2024-10-09 03:08:52.632481] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60378 ] 00:06:09.692 [2024-10-09 03:08:52.763564] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.692 [2024-10-09 03:08:52.849019] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.692 [2024-10-09 03:08:52.918506] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:09.692  [2024-10-09T03:08:53.254Z] Copying: 512/512 [B] (average 250 kBps) 00:06:09.951 00:06:09.951 03:08:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ r7xzzk2mv2joecvr2go98dk8x0r96fdn9kvhlnvxte5gra8004q85mwxqo5dv1yivv5upg96qbe519co1rx1la8ub2pg1jejsm7cqz0bw32uw35uzaubtsqctn5m118dyw6agopep2j8ep1lvz53v8q7rpyo3te9143b2q52e770vp194o1ffm544ts8uw9usz6ij0ql47y4zxgyimco36yhjakxf4rq1h34dbhfd6bm9lqf7x68chmo8inomzr5snjjkot5o3qxvpe4nzlahlbsm49sevmjvsaj9ng3qcdxthsvxt1pcyjvu7f8b2x557geglzn8sqhrnmesn3cn7rc27yzgrk42t0dl86fqp9z1iuml85cgd6hr38ih8ghra35bf3sb42bz501j3drj8fk0gmox25ykkizm55q0dz45rz2lzfsfz5we1oxi5i8b05485jd3rzd9ptd7htg3wzdeilygyw9g5ou5yw4brefka5fqf9ubok1rzzsxqnu == \r\7\x\z\z\k\2\m\v\2\j\o\e\c\v\r\2\g\o\9\8\d\k\8\x\0\r\9\6\f\d\n\9\k\v\h\l\n\v\x\t\e\5\g\r\a\8\0\0\4\q\8\5\m\w\x\q\o\5\d\v\1\y\i\v\v\5\u\p\g\9\6\q\b\e\5\1\9\c\o\1\r\x\1\l\a\8\u\b\2\p\g\1\j\e\j\s\m\7\c\q\z\0\b\w\3\2\u\w\3\5\u\z\a\u\b\t\s\q\c\t\n\5\m\1\1\8\d\y\w\6\a\g\o\p\e\p\2\j\8\e\p\1\l\v\z\5\3\v\8\q\7\r\p\y\o\3\t\e\9\1\4\3\b\2\q\5\2\e\7\7\0\v\p\1\9\4\o\1\f\f\m\5\4\4\t\s\8\u\w\9\u\s\z\6\i\j\0\q\l\4\7\y\4\z\x\g\y\i\m\c\o\3\6\y\h\j\a\k\x\f\4\r\q\1\h\3\4\d\b\h\f\d\6\b\m\9\l\q\f\7\x\6\8\c\h\m\o\8\i\n\o\m\z\r\5\s\n\j\j\k\o\t\5\o\3\q\x\v\p\e\4\n\z\l\a\h\l\b\s\m\4\9\s\e\v\m\j\v\s\a\j\9\n\g\3\q\c\d\x\t\h\s\v\x\t\1\p\c\y\j\v\u\7\f\8\b\2\x\5\5\7\g\e\g\l\z\n\8\s\q\h\r\n\m\e\s\n\3\c\n\7\r\c\2\7\y\z\g\r\k\4\2\t\0\d\l\8\6\f\q\p\9\z\1\i\u\m\l\8\5\c\g\d\6\h\r\3\8\i\h\8\g\h\r\a\3\5\b\f\3\s\b\4\2\b\z\5\0\1\j\3\d\r\j\8\f\k\0\g\m\o\x\2\5\y\k\k\i\z\m\5\5\q\0\d\z\4\5\r\z\2\l\z\f\s\f\z\5\w\e\1\o\x\i\5\i\8\b\0\5\4\8\5\j\d\3\r\z\d\9\p\t\d\7\h\t\g\3\w\z\d\e\i\l\y\g\y\w\9\g\5\o\u\5\y\w\4\b\r\e\f\k\a\5\f\q\f\9\u\b\o\k\1\r\z\z\s\x\q\n\u ]] 00:06:09.951 03:08:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:09.951 03:08:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:09.951 03:08:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:09.951 03:08:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:09.951 03:08:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:09.951 03:08:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:10.211 [2024-10-09 03:08:53.306075] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:06:10.211 [2024-10-09 03:08:53.306193] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60393 ] 00:06:10.211 [2024-10-09 03:08:53.442344] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.470 [2024-10-09 03:08:53.532346] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.470 [2024-10-09 03:08:53.604658] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:10.470  [2024-10-09T03:08:54.032Z] Copying: 512/512 [B] (average 500 kBps) 00:06:10.729 00:06:10.729 03:08:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ c8c4lxkhljzv70e4najqkec9fg9gmp1l2oxqln0nvkprn32fyerqbze7fnkvcffd83m1zbkvrjeimedx59jpc60mnehh7m50jytvjoiz21jghwjb69jbrqmcr91phtlovyx0dib0v73v1gthzv2igey5t320riu0e146a357atc08xcsq3d0j0rdynsiaonl9yz9qr64rra4dyif46ugvq0dc4c6hw26w8ejiyx3yqzs2pbsqwy6x1ji6mzyb1ccrehafnetnyq4spdj3gg667n8r85oc5mrqzw3a33tl92bole71e7omnwk94pqutj1qmhrwesmuslu7jr4npgeujaj998fit4k40eg2gmnbz71a686x0fauv0xg072rotl9my58jy3qmivxfoyzjql2821578nu9qasv66pfkzffb4vvycafwj6hegghd1ij9i4txp7a8c8zkf3ziq0d53otgw2riux3l6ls89yp3ox1u3dwc9hta6xgbfypi0xv3o == \c\8\c\4\l\x\k\h\l\j\z\v\7\0\e\4\n\a\j\q\k\e\c\9\f\g\9\g\m\p\1\l\2\o\x\q\l\n\0\n\v\k\p\r\n\3\2\f\y\e\r\q\b\z\e\7\f\n\k\v\c\f\f\d\8\3\m\1\z\b\k\v\r\j\e\i\m\e\d\x\5\9\j\p\c\6\0\m\n\e\h\h\7\m\5\0\j\y\t\v\j\o\i\z\2\1\j\g\h\w\j\b\6\9\j\b\r\q\m\c\r\9\1\p\h\t\l\o\v\y\x\0\d\i\b\0\v\7\3\v\1\g\t\h\z\v\2\i\g\e\y\5\t\3\2\0\r\i\u\0\e\1\4\6\a\3\5\7\a\t\c\0\8\x\c\s\q\3\d\0\j\0\r\d\y\n\s\i\a\o\n\l\9\y\z\9\q\r\6\4\r\r\a\4\d\y\i\f\4\6\u\g\v\q\0\d\c\4\c\6\h\w\2\6\w\8\e\j\i\y\x\3\y\q\z\s\2\p\b\s\q\w\y\6\x\1\j\i\6\m\z\y\b\1\c\c\r\e\h\a\f\n\e\t\n\y\q\4\s\p\d\j\3\g\g\6\6\7\n\8\r\8\5\o\c\5\m\r\q\z\w\3\a\3\3\t\l\9\2\b\o\l\e\7\1\e\7\o\m\n\w\k\9\4\p\q\u\t\j\1\q\m\h\r\w\e\s\m\u\s\l\u\7\j\r\4\n\p\g\e\u\j\a\j\9\9\8\f\i\t\4\k\4\0\e\g\2\g\m\n\b\z\7\1\a\6\8\6\x\0\f\a\u\v\0\x\g\0\7\2\r\o\t\l\9\m\y\5\8\j\y\3\q\m\i\v\x\f\o\y\z\j\q\l\2\8\2\1\5\7\8\n\u\9\q\a\s\v\6\6\p\f\k\z\f\f\b\4\v\v\y\c\a\f\w\j\6\h\e\g\g\h\d\1\i\j\9\i\4\t\x\p\7\a\8\c\8\z\k\f\3\z\i\q\0\d\5\3\o\t\g\w\2\r\i\u\x\3\l\6\l\s\8\9\y\p\3\o\x\1\u\3\d\w\c\9\h\t\a\6\x\g\b\f\y\p\i\0\x\v\3\o ]] 00:06:10.729 03:08:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:10.729 03:08:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:10.729 [2024-10-09 03:08:53.960138] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:06:10.729 [2024-10-09 03:08:53.960234] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60402 ] 00:06:10.988 [2024-10-09 03:08:54.095757] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.988 [2024-10-09 03:08:54.192739] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.988 [2024-10-09 03:08:54.260102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.247  [2024-10-09T03:08:54.809Z] Copying: 512/512 [B] (average 500 kBps) 00:06:11.506 00:06:11.506 03:08:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ c8c4lxkhljzv70e4najqkec9fg9gmp1l2oxqln0nvkprn32fyerqbze7fnkvcffd83m1zbkvrjeimedx59jpc60mnehh7m50jytvjoiz21jghwjb69jbrqmcr91phtlovyx0dib0v73v1gthzv2igey5t320riu0e146a357atc08xcsq3d0j0rdynsiaonl9yz9qr64rra4dyif46ugvq0dc4c6hw26w8ejiyx3yqzs2pbsqwy6x1ji6mzyb1ccrehafnetnyq4spdj3gg667n8r85oc5mrqzw3a33tl92bole71e7omnwk94pqutj1qmhrwesmuslu7jr4npgeujaj998fit4k40eg2gmnbz71a686x0fauv0xg072rotl9my58jy3qmivxfoyzjql2821578nu9qasv66pfkzffb4vvycafwj6hegghd1ij9i4txp7a8c8zkf3ziq0d53otgw2riux3l6ls89yp3ox1u3dwc9hta6xgbfypi0xv3o == \c\8\c\4\l\x\k\h\l\j\z\v\7\0\e\4\n\a\j\q\k\e\c\9\f\g\9\g\m\p\1\l\2\o\x\q\l\n\0\n\v\k\p\r\n\3\2\f\y\e\r\q\b\z\e\7\f\n\k\v\c\f\f\d\8\3\m\1\z\b\k\v\r\j\e\i\m\e\d\x\5\9\j\p\c\6\0\m\n\e\h\h\7\m\5\0\j\y\t\v\j\o\i\z\2\1\j\g\h\w\j\b\6\9\j\b\r\q\m\c\r\9\1\p\h\t\l\o\v\y\x\0\d\i\b\0\v\7\3\v\1\g\t\h\z\v\2\i\g\e\y\5\t\3\2\0\r\i\u\0\e\1\4\6\a\3\5\7\a\t\c\0\8\x\c\s\q\3\d\0\j\0\r\d\y\n\s\i\a\o\n\l\9\y\z\9\q\r\6\4\r\r\a\4\d\y\i\f\4\6\u\g\v\q\0\d\c\4\c\6\h\w\2\6\w\8\e\j\i\y\x\3\y\q\z\s\2\p\b\s\q\w\y\6\x\1\j\i\6\m\z\y\b\1\c\c\r\e\h\a\f\n\e\t\n\y\q\4\s\p\d\j\3\g\g\6\6\7\n\8\r\8\5\o\c\5\m\r\q\z\w\3\a\3\3\t\l\9\2\b\o\l\e\7\1\e\7\o\m\n\w\k\9\4\p\q\u\t\j\1\q\m\h\r\w\e\s\m\u\s\l\u\7\j\r\4\n\p\g\e\u\j\a\j\9\9\8\f\i\t\4\k\4\0\e\g\2\g\m\n\b\z\7\1\a\6\8\6\x\0\f\a\u\v\0\x\g\0\7\2\r\o\t\l\9\m\y\5\8\j\y\3\q\m\i\v\x\f\o\y\z\j\q\l\2\8\2\1\5\7\8\n\u\9\q\a\s\v\6\6\p\f\k\z\f\f\b\4\v\v\y\c\a\f\w\j\6\h\e\g\g\h\d\1\i\j\9\i\4\t\x\p\7\a\8\c\8\z\k\f\3\z\i\q\0\d\5\3\o\t\g\w\2\r\i\u\x\3\l\6\l\s\8\9\y\p\3\o\x\1\u\3\d\w\c\9\h\t\a\6\x\g\b\f\y\p\i\0\x\v\3\o ]] 00:06:11.506 03:08:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:11.506 03:08:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:11.506 [2024-10-09 03:08:54.647774] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:06:11.506 [2024-10-09 03:08:54.647894] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60412 ] 00:06:11.506 [2024-10-09 03:08:54.781676] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.765 [2024-10-09 03:08:54.873505] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.765 [2024-10-09 03:08:54.941550] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.765  [2024-10-09T03:08:55.327Z] Copying: 512/512 [B] (average 500 kBps) 00:06:12.024 00:06:12.024 03:08:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ c8c4lxkhljzv70e4najqkec9fg9gmp1l2oxqln0nvkprn32fyerqbze7fnkvcffd83m1zbkvrjeimedx59jpc60mnehh7m50jytvjoiz21jghwjb69jbrqmcr91phtlovyx0dib0v73v1gthzv2igey5t320riu0e146a357atc08xcsq3d0j0rdynsiaonl9yz9qr64rra4dyif46ugvq0dc4c6hw26w8ejiyx3yqzs2pbsqwy6x1ji6mzyb1ccrehafnetnyq4spdj3gg667n8r85oc5mrqzw3a33tl92bole71e7omnwk94pqutj1qmhrwesmuslu7jr4npgeujaj998fit4k40eg2gmnbz71a686x0fauv0xg072rotl9my58jy3qmivxfoyzjql2821578nu9qasv66pfkzffb4vvycafwj6hegghd1ij9i4txp7a8c8zkf3ziq0d53otgw2riux3l6ls89yp3ox1u3dwc9hta6xgbfypi0xv3o == \c\8\c\4\l\x\k\h\l\j\z\v\7\0\e\4\n\a\j\q\k\e\c\9\f\g\9\g\m\p\1\l\2\o\x\q\l\n\0\n\v\k\p\r\n\3\2\f\y\e\r\q\b\z\e\7\f\n\k\v\c\f\f\d\8\3\m\1\z\b\k\v\r\j\e\i\m\e\d\x\5\9\j\p\c\6\0\m\n\e\h\h\7\m\5\0\j\y\t\v\j\o\i\z\2\1\j\g\h\w\j\b\6\9\j\b\r\q\m\c\r\9\1\p\h\t\l\o\v\y\x\0\d\i\b\0\v\7\3\v\1\g\t\h\z\v\2\i\g\e\y\5\t\3\2\0\r\i\u\0\e\1\4\6\a\3\5\7\a\t\c\0\8\x\c\s\q\3\d\0\j\0\r\d\y\n\s\i\a\o\n\l\9\y\z\9\q\r\6\4\r\r\a\4\d\y\i\f\4\6\u\g\v\q\0\d\c\4\c\6\h\w\2\6\w\8\e\j\i\y\x\3\y\q\z\s\2\p\b\s\q\w\y\6\x\1\j\i\6\m\z\y\b\1\c\c\r\e\h\a\f\n\e\t\n\y\q\4\s\p\d\j\3\g\g\6\6\7\n\8\r\8\5\o\c\5\m\r\q\z\w\3\a\3\3\t\l\9\2\b\o\l\e\7\1\e\7\o\m\n\w\k\9\4\p\q\u\t\j\1\q\m\h\r\w\e\s\m\u\s\l\u\7\j\r\4\n\p\g\e\u\j\a\j\9\9\8\f\i\t\4\k\4\0\e\g\2\g\m\n\b\z\7\1\a\6\8\6\x\0\f\a\u\v\0\x\g\0\7\2\r\o\t\l\9\m\y\5\8\j\y\3\q\m\i\v\x\f\o\y\z\j\q\l\2\8\2\1\5\7\8\n\u\9\q\a\s\v\6\6\p\f\k\z\f\f\b\4\v\v\y\c\a\f\w\j\6\h\e\g\g\h\d\1\i\j\9\i\4\t\x\p\7\a\8\c\8\z\k\f\3\z\i\q\0\d\5\3\o\t\g\w\2\r\i\u\x\3\l\6\l\s\8\9\y\p\3\o\x\1\u\3\d\w\c\9\h\t\a\6\x\g\b\f\y\p\i\0\x\v\3\o ]] 00:06:12.024 03:08:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:12.024 03:08:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:12.024 [2024-10-09 03:08:55.319092] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:06:12.024 [2024-10-09 03:08:55.319177] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60422 ] 00:06:12.283 [2024-10-09 03:08:55.454844] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.283 [2024-10-09 03:08:55.546991] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.543 [2024-10-09 03:08:55.616275] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.543  [2024-10-09T03:08:56.104Z] Copying: 512/512 [B] (average 125 kBps) 00:06:12.802 00:06:12.802 ************************************ 00:06:12.802 END TEST dd_flags_misc 00:06:12.802 ************************************ 00:06:12.802 03:08:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ c8c4lxkhljzv70e4najqkec9fg9gmp1l2oxqln0nvkprn32fyerqbze7fnkvcffd83m1zbkvrjeimedx59jpc60mnehh7m50jytvjoiz21jghwjb69jbrqmcr91phtlovyx0dib0v73v1gthzv2igey5t320riu0e146a357atc08xcsq3d0j0rdynsiaonl9yz9qr64rra4dyif46ugvq0dc4c6hw26w8ejiyx3yqzs2pbsqwy6x1ji6mzyb1ccrehafnetnyq4spdj3gg667n8r85oc5mrqzw3a33tl92bole71e7omnwk94pqutj1qmhrwesmuslu7jr4npgeujaj998fit4k40eg2gmnbz71a686x0fauv0xg072rotl9my58jy3qmivxfoyzjql2821578nu9qasv66pfkzffb4vvycafwj6hegghd1ij9i4txp7a8c8zkf3ziq0d53otgw2riux3l6ls89yp3ox1u3dwc9hta6xgbfypi0xv3o == \c\8\c\4\l\x\k\h\l\j\z\v\7\0\e\4\n\a\j\q\k\e\c\9\f\g\9\g\m\p\1\l\2\o\x\q\l\n\0\n\v\k\p\r\n\3\2\f\y\e\r\q\b\z\e\7\f\n\k\v\c\f\f\d\8\3\m\1\z\b\k\v\r\j\e\i\m\e\d\x\5\9\j\p\c\6\0\m\n\e\h\h\7\m\5\0\j\y\t\v\j\o\i\z\2\1\j\g\h\w\j\b\6\9\j\b\r\q\m\c\r\9\1\p\h\t\l\o\v\y\x\0\d\i\b\0\v\7\3\v\1\g\t\h\z\v\2\i\g\e\y\5\t\3\2\0\r\i\u\0\e\1\4\6\a\3\5\7\a\t\c\0\8\x\c\s\q\3\d\0\j\0\r\d\y\n\s\i\a\o\n\l\9\y\z\9\q\r\6\4\r\r\a\4\d\y\i\f\4\6\u\g\v\q\0\d\c\4\c\6\h\w\2\6\w\8\e\j\i\y\x\3\y\q\z\s\2\p\b\s\q\w\y\6\x\1\j\i\6\m\z\y\b\1\c\c\r\e\h\a\f\n\e\t\n\y\q\4\s\p\d\j\3\g\g\6\6\7\n\8\r\8\5\o\c\5\m\r\q\z\w\3\a\3\3\t\l\9\2\b\o\l\e\7\1\e\7\o\m\n\w\k\9\4\p\q\u\t\j\1\q\m\h\r\w\e\s\m\u\s\l\u\7\j\r\4\n\p\g\e\u\j\a\j\9\9\8\f\i\t\4\k\4\0\e\g\2\g\m\n\b\z\7\1\a\6\8\6\x\0\f\a\u\v\0\x\g\0\7\2\r\o\t\l\9\m\y\5\8\j\y\3\q\m\i\v\x\f\o\y\z\j\q\l\2\8\2\1\5\7\8\n\u\9\q\a\s\v\6\6\p\f\k\z\f\f\b\4\v\v\y\c\a\f\w\j\6\h\e\g\g\h\d\1\i\j\9\i\4\t\x\p\7\a\8\c\8\z\k\f\3\z\i\q\0\d\5\3\o\t\g\w\2\r\i\u\x\3\l\6\l\s\8\9\y\p\3\o\x\1\u\3\d\w\c\9\h\t\a\6\x\g\b\f\y\p\i\0\x\v\3\o ]] 00:06:12.802 00:06:12.802 real 0m5.323s 00:06:12.802 user 0m3.085s 00:06:12.802 sys 0m2.743s 00:06:12.802 03:08:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.802 03:08:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:12.802 03:08:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:12.802 03:08:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:12.802 * Second test run, disabling liburing, forcing AIO 00:06:12.802 03:08:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:12.802 03:08:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:12.802 03:08:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.802 03:08:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.802 03:08:55 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:12.802 ************************************ 00:06:12.802 START TEST dd_flag_append_forced_aio 00:06:12.802 ************************************ 00:06:12.802 03:08:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1125 -- # append 00:06:12.802 03:08:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:12.802 03:08:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:12.802 03:08:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:12.802 03:08:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:12.802 03:08:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:12.802 03:08:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=0flzrxp3vh1qedlpb9x5s1j35h8uajbo 00:06:12.802 03:08:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:12.802 03:08:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:12.802 03:08:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:12.802 03:08:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=sss5lxtb8d75jc0s0bj1mlp6biybmbah 00:06:12.802 03:08:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s 0flzrxp3vh1qedlpb9x5s1j35h8uajbo 00:06:12.802 03:08:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s sss5lxtb8d75jc0s0bj1mlp6biybmbah 00:06:12.802 03:08:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:12.802 [2024-10-09 03:08:56.055884] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:06:12.802 [2024-10-09 03:08:56.056003] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60456 ] 00:06:13.061 [2024-10-09 03:08:56.191555] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.061 [2024-10-09 03:08:56.285804] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.061 [2024-10-09 03:08:56.355733] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.320  [2024-10-09T03:08:56.882Z] Copying: 32/32 [B] (average 31 kBps) 00:06:13.579 00:06:13.579 ************************************ 00:06:13.579 END TEST dd_flag_append_forced_aio 00:06:13.579 ************************************ 00:06:13.579 03:08:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ sss5lxtb8d75jc0s0bj1mlp6biybmbah0flzrxp3vh1qedlpb9x5s1j35h8uajbo == \s\s\s\5\l\x\t\b\8\d\7\5\j\c\0\s\0\b\j\1\m\l\p\6\b\i\y\b\m\b\a\h\0\f\l\z\r\x\p\3\v\h\1\q\e\d\l\p\b\9\x\5\s\1\j\3\5\h\8\u\a\j\b\o ]] 00:06:13.579 00:06:13.579 real 0m0.729s 00:06:13.579 user 0m0.413s 00:06:13.579 sys 0m0.193s 00:06:13.579 03:08:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.579 03:08:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:13.579 03:08:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:13.579 03:08:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.579 03:08:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.579 03:08:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:13.579 ************************************ 00:06:13.579 START TEST dd_flag_directory_forced_aio 00:06:13.579 ************************************ 00:06:13.579 03:08:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1125 -- # directory 00:06:13.579 03:08:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:13.579 03:08:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:13.579 03:08:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:13.579 03:08:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.579 03:08:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.579 03:08:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.579 03:08:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.579 03:08:56 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.579 03:08:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.579 03:08:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.579 03:08:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:13.579 03:08:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:13.579 [2024-10-09 03:08:56.833874] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:06:13.579 [2024-10-09 03:08:56.834015] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60482 ] 00:06:13.838 [2024-10-09 03:08:56.969884] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.838 [2024-10-09 03:08:57.061344] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.838 [2024-10-09 03:08:57.130918] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.098 [2024-10-09 03:08:57.176347] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:14.098 [2024-10-09 03:08:57.176408] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:14.098 [2024-10-09 03:08:57.176437] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:14.098 [2024-10-09 03:08:57.335241] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:14.371 03:08:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:06:14.371 03:08:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:14.371 03:08:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:06:14.371 03:08:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:14.371 03:08:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:14.371 03:08:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:14.371 03:08:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:14.371 03:08:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:14.371 03:08:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:14.371 03:08:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.371 03:08:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.371 03:08:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.371 03:08:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.371 03:08:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.371 03:08:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.371 03:08:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.371 03:08:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:14.371 03:08:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:14.371 [2024-10-09 03:08:57.502835] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:06:14.371 [2024-10-09 03:08:57.502933] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60497 ] 00:06:14.371 [2024-10-09 03:08:57.639275] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.659 [2024-10-09 03:08:57.741746] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.659 [2024-10-09 03:08:57.811510] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.659 [2024-10-09 03:08:57.856271] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:14.659 [2024-10-09 03:08:57.856334] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:14.659 [2024-10-09 03:08:57.856348] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:14.918 [2024-10-09 03:08:58.011851] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:14.918 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:06:14.918 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:14.918 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:06:14.918 ************************************ 00:06:14.918 END TEST dd_flag_directory_forced_aio 00:06:14.918 ************************************ 00:06:14.918 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:14.918 03:08:58 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:14.918 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:14.918 00:06:14.918 real 0m1.363s 00:06:14.918 user 0m0.794s 00:06:14.918 sys 0m0.356s 00:06:14.918 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.918 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:14.918 03:08:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:14.918 03:08:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.918 03:08:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.918 03:08:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:14.918 ************************************ 00:06:14.918 START TEST dd_flag_nofollow_forced_aio 00:06:14.918 ************************************ 00:06:14.918 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1125 -- # nofollow 00:06:14.918 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:14.918 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:14.918 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:14.918 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:14.918 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:14.918 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:14.918 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:14.918 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.918 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.918 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.918 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.918 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.918 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:06:14.918 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.918 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:14.918 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:15.177 [2024-10-09 03:08:58.254442] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:06:15.177 [2024-10-09 03:08:58.254541] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60526 ] 00:06:15.177 [2024-10-09 03:08:58.391462] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.436 [2024-10-09 03:08:58.514081] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.436 [2024-10-09 03:08:58.571753] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.436 [2024-10-09 03:08:58.610375] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:15.436 [2024-10-09 03:08:58.610448] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:15.436 [2024-10-09 03:08:58.610468] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:15.436 [2024-10-09 03:08:58.728339] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:15.695 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:06:15.695 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:15.695 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:06:15.695 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:15.695 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:15.695 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:15.695 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:15.695 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:15.695 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:15.695 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.695 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.695 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.695 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.695 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.695 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.695 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.695 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:15.695 03:08:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:15.695 [2024-10-09 03:08:58.893225] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:06:15.695 [2024-10-09 03:08:58.893326] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60535 ] 00:06:15.954 [2024-10-09 03:08:59.029902] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.954 [2024-10-09 03:08:59.118665] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.954 [2024-10-09 03:08:59.171534] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.954 [2024-10-09 03:08:59.207340] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:15.954 [2024-10-09 03:08:59.207397] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:15.954 [2024-10-09 03:08:59.207428] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:16.213 [2024-10-09 03:08:59.325197] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:16.213 03:08:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:06:16.213 03:08:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:16.213 03:08:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:06:16.213 03:08:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:16.213 03:08:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:16.213 03:08:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:16.213 03:08:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:06:16.213 03:08:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:16.213 03:08:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:16.213 03:08:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:16.213 [2024-10-09 03:08:59.473937] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:06:16.213 [2024-10-09 03:08:59.474031] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60543 ] 00:06:16.472 [2024-10-09 03:08:59.608135] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.472 [2024-10-09 03:08:59.711371] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.472 [2024-10-09 03:08:59.766865] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.731  [2024-10-09T03:09:00.292Z] Copying: 512/512 [B] (average 500 kBps) 00:06:16.989 00:06:16.989 03:09:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ j04wfl57r3v2oabg588aemzpe346rycwk92iwm3hfbb9i4yft90ldfyc1gibdkpey5b1z2ud8tl8ixrcts2ga7vi7m4q0p2epk7bjzo1v7i5506lp16o8s3lrra6189d4n05bu8oi9dqml8f5ig7bj2obrrpgke87oc6wsippw4ofmjn23j5zo67npf4klw8v4gtem3bxo7gpdapplluin9ria2xix4na713wcxxo5zdllfqakiiijj46affyjt97dd1dc42q0r4ydjg47xdojohpk03fciw9t05fdtrkn474myyik452aftuams3ak9uwd9qloo3iman16037tb5g9fsvgsgtxenq03ywko0sguxj3krw64bpz3rd4gd2tmji7g9nabpo853smdq4ov8ijeb83hu2l5axx6d1ibxo136vnkjkxt0aaxvidlrp353edzri6gh2ky3cve5okyygvvdyfl3tsl5mh27r16k4zt1japhhs4w6lbv0xblu7f == \j\0\4\w\f\l\5\7\r\3\v\2\o\a\b\g\5\8\8\a\e\m\z\p\e\3\4\6\r\y\c\w\k\9\2\i\w\m\3\h\f\b\b\9\i\4\y\f\t\9\0\l\d\f\y\c\1\g\i\b\d\k\p\e\y\5\b\1\z\2\u\d\8\t\l\8\i\x\r\c\t\s\2\g\a\7\v\i\7\m\4\q\0\p\2\e\p\k\7\b\j\z\o\1\v\7\i\5\5\0\6\l\p\1\6\o\8\s\3\l\r\r\a\6\1\8\9\d\4\n\0\5\b\u\8\o\i\9\d\q\m\l\8\f\5\i\g\7\b\j\2\o\b\r\r\p\g\k\e\8\7\o\c\6\w\s\i\p\p\w\4\o\f\m\j\n\2\3\j\5\z\o\6\7\n\p\f\4\k\l\w\8\v\4\g\t\e\m\3\b\x\o\7\g\p\d\a\p\p\l\l\u\i\n\9\r\i\a\2\x\i\x\4\n\a\7\1\3\w\c\x\x\o\5\z\d\l\l\f\q\a\k\i\i\i\j\j\4\6\a\f\f\y\j\t\9\7\d\d\1\d\c\4\2\q\0\r\4\y\d\j\g\4\7\x\d\o\j\o\h\p\k\0\3\f\c\i\w\9\t\0\5\f\d\t\r\k\n\4\7\4\m\y\y\i\k\4\5\2\a\f\t\u\a\m\s\3\a\k\9\u\w\d\9\q\l\o\o\3\i\m\a\n\1\6\0\3\7\t\b\5\g\9\f\s\v\g\s\g\t\x\e\n\q\0\3\y\w\k\o\0\s\g\u\x\j\3\k\r\w\6\4\b\p\z\3\r\d\4\g\d\2\t\m\j\i\7\g\9\n\a\b\p\o\8\5\3\s\m\d\q\4\o\v\8\i\j\e\b\8\3\h\u\2\l\5\a\x\x\6\d\1\i\b\x\o\1\3\6\v\n\k\j\k\x\t\0\a\a\x\v\i\d\l\r\p\3\5\3\e\d\z\r\i\6\g\h\2\k\y\3\c\v\e\5\o\k\y\y\g\v\v\d\y\f\l\3\t\s\l\5\m\h\2\7\r\1\6\k\4\z\t\1\j\a\p\h\h\s\4\w\6\l\b\v\0\x\b\l\u\7\f ]] 00:06:16.989 00:06:16.989 real 0m1.884s 00:06:16.989 user 0m1.076s 00:06:16.989 sys 0m0.473s 00:06:16.989 03:09:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.989 ************************************ 00:06:16.989 END TEST dd_flag_nofollow_forced_aio 00:06:16.989 ************************************ 00:06:16.989 03:09:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:16.989 03:09:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:06:16.989 03:09:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.989 03:09:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.989 03:09:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:16.989 ************************************ 00:06:16.989 START TEST dd_flag_noatime_forced_aio 00:06:16.989 ************************************ 00:06:16.989 03:09:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1125 -- # noatime 00:06:16.989 03:09:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:16.989 03:09:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:16.989 03:09:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:16.989 03:09:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:16.989 03:09:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:16.989 03:09:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:16.989 03:09:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1728443339 00:06:16.989 03:09:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:16.989 03:09:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1728443340 00:06:16.989 03:09:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:06:17.925 03:09:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:17.925 [2024-10-09 03:09:01.211681] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:06:17.925 [2024-10-09 03:09:01.211794] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60589 ] 00:06:18.184 [2024-10-09 03:09:01.353299] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.184 [2024-10-09 03:09:01.463675] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.443 [2024-10-09 03:09:01.518849] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:18.443  [2024-10-09T03:09:02.004Z] Copying: 512/512 [B] (average 500 kBps) 00:06:18.701 00:06:18.701 03:09:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:18.701 03:09:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1728443339 )) 00:06:18.701 03:09:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:18.701 03:09:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1728443340 )) 00:06:18.701 03:09:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:18.701 [2024-10-09 03:09:01.865165] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:06:18.701 [2024-10-09 03:09:01.865252] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60600 ] 00:06:18.701 [2024-10-09 03:09:02.002370] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.960 [2024-10-09 03:09:02.098934] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.960 [2024-10-09 03:09:02.154901] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:18.960  [2024-10-09T03:09:02.523Z] Copying: 512/512 [B] (average 500 kBps) 00:06:19.220 00:06:19.220 03:09:02 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:19.220 ************************************ 00:06:19.220 END TEST dd_flag_noatime_forced_aio 00:06:19.220 ************************************ 00:06:19.220 03:09:02 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1728443342 )) 00:06:19.220 00:06:19.220 real 0m2.311s 00:06:19.220 user 0m0.735s 00:06:19.220 sys 0m0.335s 00:06:19.220 03:09:02 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.220 03:09:02 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:19.220 03:09:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:19.220 03:09:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.220 03:09:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.220 03:09:02 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:19.220 ************************************ 00:06:19.220 START TEST dd_flags_misc_forced_aio 00:06:19.220 ************************************ 00:06:19.220 03:09:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1125 -- # io 00:06:19.220 03:09:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:19.220 03:09:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:19.220 03:09:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:19.220 03:09:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:19.220 03:09:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:19.220 03:09:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:19.220 03:09:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:19.220 03:09:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:19.220 03:09:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:19.479 [2024-10-09 03:09:02.561249] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:06:19.479 [2024-10-09 03:09:02.561346] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60627 ] 00:06:19.479 [2024-10-09 03:09:02.699933] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.738 [2024-10-09 03:09:02.797986] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.738 [2024-10-09 03:09:02.852885] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:19.738  [2024-10-09T03:09:03.300Z] Copying: 512/512 [B] (average 500 kBps) 00:06:19.997 00:06:19.997 03:09:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ p7oscgn6tv4qb58b4f2124dru7ms0momxch2ao58ld65tvz6d8hkj2r0fqw01o78r9eglnm4qweqyib2qcs7njt7t1ypqmhsd5whktereg1hcf07u8fwl6j9ucb2rpubwwlj5qb5nqdnd1jxzgd8u8bpo4hdytzzb28xf50zw9rqt2n7a4s0tgeoprfh2rniwrqmjq1s8sn0hwozr4641ulwbwdhsh7ismwlat5wl6vudke0shp41nhmiew84zx8pv0qjcydvo01yobsufbjvttleggdt7i52iwgcwjdc7zsgnosd6lozswyr8r6jq7k6ndmntabrrr0s1fdenea84sipfuknggql98kbp9kp3mftb65oglvoxr8v0nk89u3e98no11q3kuzop0dv1gu41poxgxa65lid6tufomgzv127vf0nxg73xcv1ysje2uxbmu9uxbccm8z8gdplu8g7cx8wcidp4p3yp87f75anju1stq0hwflv9sc0eic2uif == 
\p\7\o\s\c\g\n\6\t\v\4\q\b\5\8\b\4\f\2\1\2\4\d\r\u\7\m\s\0\m\o\m\x\c\h\2\a\o\5\8\l\d\6\5\t\v\z\6\d\8\h\k\j\2\r\0\f\q\w\0\1\o\7\8\r\9\e\g\l\n\m\4\q\w\e\q\y\i\b\2\q\c\s\7\n\j\t\7\t\1\y\p\q\m\h\s\d\5\w\h\k\t\e\r\e\g\1\h\c\f\0\7\u\8\f\w\l\6\j\9\u\c\b\2\r\p\u\b\w\w\l\j\5\q\b\5\n\q\d\n\d\1\j\x\z\g\d\8\u\8\b\p\o\4\h\d\y\t\z\z\b\2\8\x\f\5\0\z\w\9\r\q\t\2\n\7\a\4\s\0\t\g\e\o\p\r\f\h\2\r\n\i\w\r\q\m\j\q\1\s\8\s\n\0\h\w\o\z\r\4\6\4\1\u\l\w\b\w\d\h\s\h\7\i\s\m\w\l\a\t\5\w\l\6\v\u\d\k\e\0\s\h\p\4\1\n\h\m\i\e\w\8\4\z\x\8\p\v\0\q\j\c\y\d\v\o\0\1\y\o\b\s\u\f\b\j\v\t\t\l\e\g\g\d\t\7\i\5\2\i\w\g\c\w\j\d\c\7\z\s\g\n\o\s\d\6\l\o\z\s\w\y\r\8\r\6\j\q\7\k\6\n\d\m\n\t\a\b\r\r\r\0\s\1\f\d\e\n\e\a\8\4\s\i\p\f\u\k\n\g\g\q\l\9\8\k\b\p\9\k\p\3\m\f\t\b\6\5\o\g\l\v\o\x\r\8\v\0\n\k\8\9\u\3\e\9\8\n\o\1\1\q\3\k\u\z\o\p\0\d\v\1\g\u\4\1\p\o\x\g\x\a\6\5\l\i\d\6\t\u\f\o\m\g\z\v\1\2\7\v\f\0\n\x\g\7\3\x\c\v\1\y\s\j\e\2\u\x\b\m\u\9\u\x\b\c\c\m\8\z\8\g\d\p\l\u\8\g\7\c\x\8\w\c\i\d\p\4\p\3\y\p\8\7\f\7\5\a\n\j\u\1\s\t\q\0\h\w\f\l\v\9\s\c\0\e\i\c\2\u\i\f ]] 00:06:19.997 03:09:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:19.997 03:09:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:19.997 [2024-10-09 03:09:03.174897] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:06:19.997 [2024-10-09 03:09:03.175175] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60640 ] 00:06:20.256 [2024-10-09 03:09:03.303340] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.256 [2024-10-09 03:09:03.409300] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.256 [2024-10-09 03:09:03.464983] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:20.256  [2024-10-09T03:09:03.818Z] Copying: 512/512 [B] (average 500 kBps) 00:06:20.515 00:06:20.515 03:09:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ p7oscgn6tv4qb58b4f2124dru7ms0momxch2ao58ld65tvz6d8hkj2r0fqw01o78r9eglnm4qweqyib2qcs7njt7t1ypqmhsd5whktereg1hcf07u8fwl6j9ucb2rpubwwlj5qb5nqdnd1jxzgd8u8bpo4hdytzzb28xf50zw9rqt2n7a4s0tgeoprfh2rniwrqmjq1s8sn0hwozr4641ulwbwdhsh7ismwlat5wl6vudke0shp41nhmiew84zx8pv0qjcydvo01yobsufbjvttleggdt7i52iwgcwjdc7zsgnosd6lozswyr8r6jq7k6ndmntabrrr0s1fdenea84sipfuknggql98kbp9kp3mftb65oglvoxr8v0nk89u3e98no11q3kuzop0dv1gu41poxgxa65lid6tufomgzv127vf0nxg73xcv1ysje2uxbmu9uxbccm8z8gdplu8g7cx8wcidp4p3yp87f75anju1stq0hwflv9sc0eic2uif == 
\p\7\o\s\c\g\n\6\t\v\4\q\b\5\8\b\4\f\2\1\2\4\d\r\u\7\m\s\0\m\o\m\x\c\h\2\a\o\5\8\l\d\6\5\t\v\z\6\d\8\h\k\j\2\r\0\f\q\w\0\1\o\7\8\r\9\e\g\l\n\m\4\q\w\e\q\y\i\b\2\q\c\s\7\n\j\t\7\t\1\y\p\q\m\h\s\d\5\w\h\k\t\e\r\e\g\1\h\c\f\0\7\u\8\f\w\l\6\j\9\u\c\b\2\r\p\u\b\w\w\l\j\5\q\b\5\n\q\d\n\d\1\j\x\z\g\d\8\u\8\b\p\o\4\h\d\y\t\z\z\b\2\8\x\f\5\0\z\w\9\r\q\t\2\n\7\a\4\s\0\t\g\e\o\p\r\f\h\2\r\n\i\w\r\q\m\j\q\1\s\8\s\n\0\h\w\o\z\r\4\6\4\1\u\l\w\b\w\d\h\s\h\7\i\s\m\w\l\a\t\5\w\l\6\v\u\d\k\e\0\s\h\p\4\1\n\h\m\i\e\w\8\4\z\x\8\p\v\0\q\j\c\y\d\v\o\0\1\y\o\b\s\u\f\b\j\v\t\t\l\e\g\g\d\t\7\i\5\2\i\w\g\c\w\j\d\c\7\z\s\g\n\o\s\d\6\l\o\z\s\w\y\r\8\r\6\j\q\7\k\6\n\d\m\n\t\a\b\r\r\r\0\s\1\f\d\e\n\e\a\8\4\s\i\p\f\u\k\n\g\g\q\l\9\8\k\b\p\9\k\p\3\m\f\t\b\6\5\o\g\l\v\o\x\r\8\v\0\n\k\8\9\u\3\e\9\8\n\o\1\1\q\3\k\u\z\o\p\0\d\v\1\g\u\4\1\p\o\x\g\x\a\6\5\l\i\d\6\t\u\f\o\m\g\z\v\1\2\7\v\f\0\n\x\g\7\3\x\c\v\1\y\s\j\e\2\u\x\b\m\u\9\u\x\b\c\c\m\8\z\8\g\d\p\l\u\8\g\7\c\x\8\w\c\i\d\p\4\p\3\y\p\8\7\f\7\5\a\n\j\u\1\s\t\q\0\h\w\f\l\v\9\s\c\0\e\i\c\2\u\i\f ]] 00:06:20.515 03:09:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:20.515 03:09:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:20.515 [2024-10-09 03:09:03.779874] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:06:20.515 [2024-10-09 03:09:03.779969] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60646 ] 00:06:20.775 [2024-10-09 03:09:03.909198] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.775 [2024-10-09 03:09:03.995295] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.775 [2024-10-09 03:09:04.047794] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.033  [2024-10-09T03:09:04.336Z] Copying: 512/512 [B] (average 125 kBps) 00:06:21.033 00:06:21.293 03:09:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ p7oscgn6tv4qb58b4f2124dru7ms0momxch2ao58ld65tvz6d8hkj2r0fqw01o78r9eglnm4qweqyib2qcs7njt7t1ypqmhsd5whktereg1hcf07u8fwl6j9ucb2rpubwwlj5qb5nqdnd1jxzgd8u8bpo4hdytzzb28xf50zw9rqt2n7a4s0tgeoprfh2rniwrqmjq1s8sn0hwozr4641ulwbwdhsh7ismwlat5wl6vudke0shp41nhmiew84zx8pv0qjcydvo01yobsufbjvttleggdt7i52iwgcwjdc7zsgnosd6lozswyr8r6jq7k6ndmntabrrr0s1fdenea84sipfuknggql98kbp9kp3mftb65oglvoxr8v0nk89u3e98no11q3kuzop0dv1gu41poxgxa65lid6tufomgzv127vf0nxg73xcv1ysje2uxbmu9uxbccm8z8gdplu8g7cx8wcidp4p3yp87f75anju1stq0hwflv9sc0eic2uif == 
\p\7\o\s\c\g\n\6\t\v\4\q\b\5\8\b\4\f\2\1\2\4\d\r\u\7\m\s\0\m\o\m\x\c\h\2\a\o\5\8\l\d\6\5\t\v\z\6\d\8\h\k\j\2\r\0\f\q\w\0\1\o\7\8\r\9\e\g\l\n\m\4\q\w\e\q\y\i\b\2\q\c\s\7\n\j\t\7\t\1\y\p\q\m\h\s\d\5\w\h\k\t\e\r\e\g\1\h\c\f\0\7\u\8\f\w\l\6\j\9\u\c\b\2\r\p\u\b\w\w\l\j\5\q\b\5\n\q\d\n\d\1\j\x\z\g\d\8\u\8\b\p\o\4\h\d\y\t\z\z\b\2\8\x\f\5\0\z\w\9\r\q\t\2\n\7\a\4\s\0\t\g\e\o\p\r\f\h\2\r\n\i\w\r\q\m\j\q\1\s\8\s\n\0\h\w\o\z\r\4\6\4\1\u\l\w\b\w\d\h\s\h\7\i\s\m\w\l\a\t\5\w\l\6\v\u\d\k\e\0\s\h\p\4\1\n\h\m\i\e\w\8\4\z\x\8\p\v\0\q\j\c\y\d\v\o\0\1\y\o\b\s\u\f\b\j\v\t\t\l\e\g\g\d\t\7\i\5\2\i\w\g\c\w\j\d\c\7\z\s\g\n\o\s\d\6\l\o\z\s\w\y\r\8\r\6\j\q\7\k\6\n\d\m\n\t\a\b\r\r\r\0\s\1\f\d\e\n\e\a\8\4\s\i\p\f\u\k\n\g\g\q\l\9\8\k\b\p\9\k\p\3\m\f\t\b\6\5\o\g\l\v\o\x\r\8\v\0\n\k\8\9\u\3\e\9\8\n\o\1\1\q\3\k\u\z\o\p\0\d\v\1\g\u\4\1\p\o\x\g\x\a\6\5\l\i\d\6\t\u\f\o\m\g\z\v\1\2\7\v\f\0\n\x\g\7\3\x\c\v\1\y\s\j\e\2\u\x\b\m\u\9\u\x\b\c\c\m\8\z\8\g\d\p\l\u\8\g\7\c\x\8\w\c\i\d\p\4\p\3\y\p\8\7\f\7\5\a\n\j\u\1\s\t\q\0\h\w\f\l\v\9\s\c\0\e\i\c\2\u\i\f ]] 00:06:21.293 03:09:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:21.293 03:09:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:21.293 [2024-10-09 03:09:04.393887] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:06:21.293 [2024-10-09 03:09:04.393992] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60655 ] 00:06:21.293 [2024-10-09 03:09:04.530611] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.552 [2024-10-09 03:09:04.607504] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.552 [2024-10-09 03:09:04.658760] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.552  [2024-10-09T03:09:05.118Z] Copying: 512/512 [B] (average 500 kBps) 00:06:21.815 00:06:21.815 03:09:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ p7oscgn6tv4qb58b4f2124dru7ms0momxch2ao58ld65tvz6d8hkj2r0fqw01o78r9eglnm4qweqyib2qcs7njt7t1ypqmhsd5whktereg1hcf07u8fwl6j9ucb2rpubwwlj5qb5nqdnd1jxzgd8u8bpo4hdytzzb28xf50zw9rqt2n7a4s0tgeoprfh2rniwrqmjq1s8sn0hwozr4641ulwbwdhsh7ismwlat5wl6vudke0shp41nhmiew84zx8pv0qjcydvo01yobsufbjvttleggdt7i52iwgcwjdc7zsgnosd6lozswyr8r6jq7k6ndmntabrrr0s1fdenea84sipfuknggql98kbp9kp3mftb65oglvoxr8v0nk89u3e98no11q3kuzop0dv1gu41poxgxa65lid6tufomgzv127vf0nxg73xcv1ysje2uxbmu9uxbccm8z8gdplu8g7cx8wcidp4p3yp87f75anju1stq0hwflv9sc0eic2uif == 
\p\7\o\s\c\g\n\6\t\v\4\q\b\5\8\b\4\f\2\1\2\4\d\r\u\7\m\s\0\m\o\m\x\c\h\2\a\o\5\8\l\d\6\5\t\v\z\6\d\8\h\k\j\2\r\0\f\q\w\0\1\o\7\8\r\9\e\g\l\n\m\4\q\w\e\q\y\i\b\2\q\c\s\7\n\j\t\7\t\1\y\p\q\m\h\s\d\5\w\h\k\t\e\r\e\g\1\h\c\f\0\7\u\8\f\w\l\6\j\9\u\c\b\2\r\p\u\b\w\w\l\j\5\q\b\5\n\q\d\n\d\1\j\x\z\g\d\8\u\8\b\p\o\4\h\d\y\t\z\z\b\2\8\x\f\5\0\z\w\9\r\q\t\2\n\7\a\4\s\0\t\g\e\o\p\r\f\h\2\r\n\i\w\r\q\m\j\q\1\s\8\s\n\0\h\w\o\z\r\4\6\4\1\u\l\w\b\w\d\h\s\h\7\i\s\m\w\l\a\t\5\w\l\6\v\u\d\k\e\0\s\h\p\4\1\n\h\m\i\e\w\8\4\z\x\8\p\v\0\q\j\c\y\d\v\o\0\1\y\o\b\s\u\f\b\j\v\t\t\l\e\g\g\d\t\7\i\5\2\i\w\g\c\w\j\d\c\7\z\s\g\n\o\s\d\6\l\o\z\s\w\y\r\8\r\6\j\q\7\k\6\n\d\m\n\t\a\b\r\r\r\0\s\1\f\d\e\n\e\a\8\4\s\i\p\f\u\k\n\g\g\q\l\9\8\k\b\p\9\k\p\3\m\f\t\b\6\5\o\g\l\v\o\x\r\8\v\0\n\k\8\9\u\3\e\9\8\n\o\1\1\q\3\k\u\z\o\p\0\d\v\1\g\u\4\1\p\o\x\g\x\a\6\5\l\i\d\6\t\u\f\o\m\g\z\v\1\2\7\v\f\0\n\x\g\7\3\x\c\v\1\y\s\j\e\2\u\x\b\m\u\9\u\x\b\c\c\m\8\z\8\g\d\p\l\u\8\g\7\c\x\8\w\c\i\d\p\4\p\3\y\p\8\7\f\7\5\a\n\j\u\1\s\t\q\0\h\w\f\l\v\9\s\c\0\e\i\c\2\u\i\f ]] 00:06:21.815 03:09:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:21.815 03:09:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:21.815 03:09:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:21.815 03:09:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:21.815 03:09:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:21.815 03:09:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:21.815 [2024-10-09 03:09:04.988488] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:06:21.815 [2024-10-09 03:09:04.988774] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60662 ] 00:06:22.075 [2024-10-09 03:09:05.126897] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.075 [2024-10-09 03:09:05.213343] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.075 [2024-10-09 03:09:05.265194] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:22.075  [2024-10-09T03:09:05.637Z] Copying: 512/512 [B] (average 500 kBps) 00:06:22.334 00:06:22.334 03:09:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xbuf3lzdc9mrwihlk3m8vntsq8xq3xeat0j1025jcayxsrzja56gbnpew8lthmihwm68wfbe36vbxy29yupfucozdpxepz4nfmiwatt32aj28omzg6xotad6x6cuuwd6cj286gxb8xyrr2lzndk42j7bf6bf84a98ep3dbcomnut19xwab1h7qo4ywtzubb0dtzu4y0duuxnuecuxac4enib9wdl502folooa5q8r9cho69arauojvya8kbox0ig99jix2ajcvs2gj7zs230u18sqdi9zkorom9f2euci08ge6w1b9mzrsgxkh3c3i3fhjq2gxcgqtlthlhc7b7g38v2cmcfkdmf7dh1v5q2hiqxzu1ipup1glhh8rdlxrcfx5vdtaxbmi1k9ttop7lg5xhienqunkfsqlj5cmpomtg890cfuapxovlz2noop3xxeg3ig9ek57ufu95cd84i9zbe0sezqgu4nwmokr2sgnx6mlar41dhfxoy00vwbbqq == \x\b\u\f\3\l\z\d\c\9\m\r\w\i\h\l\k\3\m\8\v\n\t\s\q\8\x\q\3\x\e\a\t\0\j\1\0\2\5\j\c\a\y\x\s\r\z\j\a\5\6\g\b\n\p\e\w\8\l\t\h\m\i\h\w\m\6\8\w\f\b\e\3\6\v\b\x\y\2\9\y\u\p\f\u\c\o\z\d\p\x\e\p\z\4\n\f\m\i\w\a\t\t\3\2\a\j\2\8\o\m\z\g\6\x\o\t\a\d\6\x\6\c\u\u\w\d\6\c\j\2\8\6\g\x\b\8\x\y\r\r\2\l\z\n\d\k\4\2\j\7\b\f\6\b\f\8\4\a\9\8\e\p\3\d\b\c\o\m\n\u\t\1\9\x\w\a\b\1\h\7\q\o\4\y\w\t\z\u\b\b\0\d\t\z\u\4\y\0\d\u\u\x\n\u\e\c\u\x\a\c\4\e\n\i\b\9\w\d\l\5\0\2\f\o\l\o\o\a\5\q\8\r\9\c\h\o\6\9\a\r\a\u\o\j\v\y\a\8\k\b\o\x\0\i\g\9\9\j\i\x\2\a\j\c\v\s\2\g\j\7\z\s\2\3\0\u\1\8\s\q\d\i\9\z\k\o\r\o\m\9\f\2\e\u\c\i\0\8\g\e\6\w\1\b\9\m\z\r\s\g\x\k\h\3\c\3\i\3\f\h\j\q\2\g\x\c\g\q\t\l\t\h\l\h\c\7\b\7\g\3\8\v\2\c\m\c\f\k\d\m\f\7\d\h\1\v\5\q\2\h\i\q\x\z\u\1\i\p\u\p\1\g\l\h\h\8\r\d\l\x\r\c\f\x\5\v\d\t\a\x\b\m\i\1\k\9\t\t\o\p\7\l\g\5\x\h\i\e\n\q\u\n\k\f\s\q\l\j\5\c\m\p\o\m\t\g\8\9\0\c\f\u\a\p\x\o\v\l\z\2\n\o\o\p\3\x\x\e\g\3\i\g\9\e\k\5\7\u\f\u\9\5\c\d\8\4\i\9\z\b\e\0\s\e\z\q\g\u\4\n\w\m\o\k\r\2\s\g\n\x\6\m\l\a\r\4\1\d\h\f\x\o\y\0\0\v\w\b\b\q\q ]] 00:06:22.334 03:09:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:22.334 03:09:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:22.334 [2024-10-09 03:09:05.590708] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:06:22.334 [2024-10-09 03:09:05.590966] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60670 ] 00:06:22.593 [2024-10-09 03:09:05.726854] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.593 [2024-10-09 03:09:05.803774] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.593 [2024-10-09 03:09:05.860793] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:22.593  [2024-10-09T03:09:06.154Z] Copying: 512/512 [B] (average 500 kBps) 00:06:22.851 00:06:22.851 03:09:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xbuf3lzdc9mrwihlk3m8vntsq8xq3xeat0j1025jcayxsrzja56gbnpew8lthmihwm68wfbe36vbxy29yupfucozdpxepz4nfmiwatt32aj28omzg6xotad6x6cuuwd6cj286gxb8xyrr2lzndk42j7bf6bf84a98ep3dbcomnut19xwab1h7qo4ywtzubb0dtzu4y0duuxnuecuxac4enib9wdl502folooa5q8r9cho69arauojvya8kbox0ig99jix2ajcvs2gj7zs230u18sqdi9zkorom9f2euci08ge6w1b9mzrsgxkh3c3i3fhjq2gxcgqtlthlhc7b7g38v2cmcfkdmf7dh1v5q2hiqxzu1ipup1glhh8rdlxrcfx5vdtaxbmi1k9ttop7lg5xhienqunkfsqlj5cmpomtg890cfuapxovlz2noop3xxeg3ig9ek57ufu95cd84i9zbe0sezqgu4nwmokr2sgnx6mlar41dhfxoy00vwbbqq == \x\b\u\f\3\l\z\d\c\9\m\r\w\i\h\l\k\3\m\8\v\n\t\s\q\8\x\q\3\x\e\a\t\0\j\1\0\2\5\j\c\a\y\x\s\r\z\j\a\5\6\g\b\n\p\e\w\8\l\t\h\m\i\h\w\m\6\8\w\f\b\e\3\6\v\b\x\y\2\9\y\u\p\f\u\c\o\z\d\p\x\e\p\z\4\n\f\m\i\w\a\t\t\3\2\a\j\2\8\o\m\z\g\6\x\o\t\a\d\6\x\6\c\u\u\w\d\6\c\j\2\8\6\g\x\b\8\x\y\r\r\2\l\z\n\d\k\4\2\j\7\b\f\6\b\f\8\4\a\9\8\e\p\3\d\b\c\o\m\n\u\t\1\9\x\w\a\b\1\h\7\q\o\4\y\w\t\z\u\b\b\0\d\t\z\u\4\y\0\d\u\u\x\n\u\e\c\u\x\a\c\4\e\n\i\b\9\w\d\l\5\0\2\f\o\l\o\o\a\5\q\8\r\9\c\h\o\6\9\a\r\a\u\o\j\v\y\a\8\k\b\o\x\0\i\g\9\9\j\i\x\2\a\j\c\v\s\2\g\j\7\z\s\2\3\0\u\1\8\s\q\d\i\9\z\k\o\r\o\m\9\f\2\e\u\c\i\0\8\g\e\6\w\1\b\9\m\z\r\s\g\x\k\h\3\c\3\i\3\f\h\j\q\2\g\x\c\g\q\t\l\t\h\l\h\c\7\b\7\g\3\8\v\2\c\m\c\f\k\d\m\f\7\d\h\1\v\5\q\2\h\i\q\x\z\u\1\i\p\u\p\1\g\l\h\h\8\r\d\l\x\r\c\f\x\5\v\d\t\a\x\b\m\i\1\k\9\t\t\o\p\7\l\g\5\x\h\i\e\n\q\u\n\k\f\s\q\l\j\5\c\m\p\o\m\t\g\8\9\0\c\f\u\a\p\x\o\v\l\z\2\n\o\o\p\3\x\x\e\g\3\i\g\9\e\k\5\7\u\f\u\9\5\c\d\8\4\i\9\z\b\e\0\s\e\z\q\g\u\4\n\w\m\o\k\r\2\s\g\n\x\6\m\l\a\r\4\1\d\h\f\x\o\y\0\0\v\w\b\b\q\q ]] 00:06:22.851 03:09:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:22.851 03:09:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:22.851 [2024-10-09 03:09:06.152194] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:06:22.851 [2024-10-09 03:09:06.152293] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60677 ] 00:06:23.109 [2024-10-09 03:09:06.290464] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.109 [2024-10-09 03:09:06.372604] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.368 [2024-10-09 03:09:06.427257] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.368  [2024-10-09T03:09:06.930Z] Copying: 512/512 [B] (average 166 kBps) 00:06:23.627 00:06:23.627 03:09:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xbuf3lzdc9mrwihlk3m8vntsq8xq3xeat0j1025jcayxsrzja56gbnpew8lthmihwm68wfbe36vbxy29yupfucozdpxepz4nfmiwatt32aj28omzg6xotad6x6cuuwd6cj286gxb8xyrr2lzndk42j7bf6bf84a98ep3dbcomnut19xwab1h7qo4ywtzubb0dtzu4y0duuxnuecuxac4enib9wdl502folooa5q8r9cho69arauojvya8kbox0ig99jix2ajcvs2gj7zs230u18sqdi9zkorom9f2euci08ge6w1b9mzrsgxkh3c3i3fhjq2gxcgqtlthlhc7b7g38v2cmcfkdmf7dh1v5q2hiqxzu1ipup1glhh8rdlxrcfx5vdtaxbmi1k9ttop7lg5xhienqunkfsqlj5cmpomtg890cfuapxovlz2noop3xxeg3ig9ek57ufu95cd84i9zbe0sezqgu4nwmokr2sgnx6mlar41dhfxoy00vwbbqq == \x\b\u\f\3\l\z\d\c\9\m\r\w\i\h\l\k\3\m\8\v\n\t\s\q\8\x\q\3\x\e\a\t\0\j\1\0\2\5\j\c\a\y\x\s\r\z\j\a\5\6\g\b\n\p\e\w\8\l\t\h\m\i\h\w\m\6\8\w\f\b\e\3\6\v\b\x\y\2\9\y\u\p\f\u\c\o\z\d\p\x\e\p\z\4\n\f\m\i\w\a\t\t\3\2\a\j\2\8\o\m\z\g\6\x\o\t\a\d\6\x\6\c\u\u\w\d\6\c\j\2\8\6\g\x\b\8\x\y\r\r\2\l\z\n\d\k\4\2\j\7\b\f\6\b\f\8\4\a\9\8\e\p\3\d\b\c\o\m\n\u\t\1\9\x\w\a\b\1\h\7\q\o\4\y\w\t\z\u\b\b\0\d\t\z\u\4\y\0\d\u\u\x\n\u\e\c\u\x\a\c\4\e\n\i\b\9\w\d\l\5\0\2\f\o\l\o\o\a\5\q\8\r\9\c\h\o\6\9\a\r\a\u\o\j\v\y\a\8\k\b\o\x\0\i\g\9\9\j\i\x\2\a\j\c\v\s\2\g\j\7\z\s\2\3\0\u\1\8\s\q\d\i\9\z\k\o\r\o\m\9\f\2\e\u\c\i\0\8\g\e\6\w\1\b\9\m\z\r\s\g\x\k\h\3\c\3\i\3\f\h\j\q\2\g\x\c\g\q\t\l\t\h\l\h\c\7\b\7\g\3\8\v\2\c\m\c\f\k\d\m\f\7\d\h\1\v\5\q\2\h\i\q\x\z\u\1\i\p\u\p\1\g\l\h\h\8\r\d\l\x\r\c\f\x\5\v\d\t\a\x\b\m\i\1\k\9\t\t\o\p\7\l\g\5\x\h\i\e\n\q\u\n\k\f\s\q\l\j\5\c\m\p\o\m\t\g\8\9\0\c\f\u\a\p\x\o\v\l\z\2\n\o\o\p\3\x\x\e\g\3\i\g\9\e\k\5\7\u\f\u\9\5\c\d\8\4\i\9\z\b\e\0\s\e\z\q\g\u\4\n\w\m\o\k\r\2\s\g\n\x\6\m\l\a\r\4\1\d\h\f\x\o\y\0\0\v\w\b\b\q\q ]] 00:06:23.627 03:09:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:23.627 03:09:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:23.627 [2024-10-09 03:09:06.741435] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:06:23.627 [2024-10-09 03:09:06.741529] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60685 ] 00:06:23.627 [2024-10-09 03:09:06.875048] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.885 [2024-10-09 03:09:06.960691] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.886 [2024-10-09 03:09:07.011974] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.886  [2024-10-09T03:09:07.447Z] Copying: 512/512 [B] (average 500 kBps) 00:06:24.144 00:06:24.145 03:09:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xbuf3lzdc9mrwihlk3m8vntsq8xq3xeat0j1025jcayxsrzja56gbnpew8lthmihwm68wfbe36vbxy29yupfucozdpxepz4nfmiwatt32aj28omzg6xotad6x6cuuwd6cj286gxb8xyrr2lzndk42j7bf6bf84a98ep3dbcomnut19xwab1h7qo4ywtzubb0dtzu4y0duuxnuecuxac4enib9wdl502folooa5q8r9cho69arauojvya8kbox0ig99jix2ajcvs2gj7zs230u18sqdi9zkorom9f2euci08ge6w1b9mzrsgxkh3c3i3fhjq2gxcgqtlthlhc7b7g38v2cmcfkdmf7dh1v5q2hiqxzu1ipup1glhh8rdlxrcfx5vdtaxbmi1k9ttop7lg5xhienqunkfsqlj5cmpomtg890cfuapxovlz2noop3xxeg3ig9ek57ufu95cd84i9zbe0sezqgu4nwmokr2sgnx6mlar41dhfxoy00vwbbqq == \x\b\u\f\3\l\z\d\c\9\m\r\w\i\h\l\k\3\m\8\v\n\t\s\q\8\x\q\3\x\e\a\t\0\j\1\0\2\5\j\c\a\y\x\s\r\z\j\a\5\6\g\b\n\p\e\w\8\l\t\h\m\i\h\w\m\6\8\w\f\b\e\3\6\v\b\x\y\2\9\y\u\p\f\u\c\o\z\d\p\x\e\p\z\4\n\f\m\i\w\a\t\t\3\2\a\j\2\8\o\m\z\g\6\x\o\t\a\d\6\x\6\c\u\u\w\d\6\c\j\2\8\6\g\x\b\8\x\y\r\r\2\l\z\n\d\k\4\2\j\7\b\f\6\b\f\8\4\a\9\8\e\p\3\d\b\c\o\m\n\u\t\1\9\x\w\a\b\1\h\7\q\o\4\y\w\t\z\u\b\b\0\d\t\z\u\4\y\0\d\u\u\x\n\u\e\c\u\x\a\c\4\e\n\i\b\9\w\d\l\5\0\2\f\o\l\o\o\a\5\q\8\r\9\c\h\o\6\9\a\r\a\u\o\j\v\y\a\8\k\b\o\x\0\i\g\9\9\j\i\x\2\a\j\c\v\s\2\g\j\7\z\s\2\3\0\u\1\8\s\q\d\i\9\z\k\o\r\o\m\9\f\2\e\u\c\i\0\8\g\e\6\w\1\b\9\m\z\r\s\g\x\k\h\3\c\3\i\3\f\h\j\q\2\g\x\c\g\q\t\l\t\h\l\h\c\7\b\7\g\3\8\v\2\c\m\c\f\k\d\m\f\7\d\h\1\v\5\q\2\h\i\q\x\z\u\1\i\p\u\p\1\g\l\h\h\8\r\d\l\x\r\c\f\x\5\v\d\t\a\x\b\m\i\1\k\9\t\t\o\p\7\l\g\5\x\h\i\e\n\q\u\n\k\f\s\q\l\j\5\c\m\p\o\m\t\g\8\9\0\c\f\u\a\p\x\o\v\l\z\2\n\o\o\p\3\x\x\e\g\3\i\g\9\e\k\5\7\u\f\u\9\5\c\d\8\4\i\9\z\b\e\0\s\e\z\q\g\u\4\n\w\m\o\k\r\2\s\g\n\x\6\m\l\a\r\4\1\d\h\f\x\o\y\0\0\v\w\b\b\q\q ]] 00:06:24.145 00:06:24.145 real 0m4.770s 00:06:24.145 user 0m2.611s 00:06:24.145 sys 0m1.195s 00:06:24.145 03:09:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.145 03:09:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:24.145 ************************************ 00:06:24.145 END TEST dd_flags_misc_forced_aio 00:06:24.145 ************************************ 00:06:24.145 03:09:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:06:24.145 03:09:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:24.145 03:09:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:24.145 ************************************ 00:06:24.145 END TEST spdk_dd_posix 00:06:24.145 ************************************ 00:06:24.145 00:06:24.145 real 0m23.719s 00:06:24.145 user 0m12.255s 00:06:24.145 sys 0m7.881s 00:06:24.145 03:09:07 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.145 03:09:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:24.145 03:09:07 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:24.145 03:09:07 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.145 03:09:07 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.145 03:09:07 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:24.145 ************************************ 00:06:24.145 START TEST spdk_dd_malloc 00:06:24.145 ************************************ 00:06:24.145 03:09:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:24.145 * Looking for test storage... 00:06:24.404 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:24.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.404 --rc genhtml_branch_coverage=1 00:06:24.404 --rc genhtml_function_coverage=1 00:06:24.404 --rc genhtml_legend=1 00:06:24.404 --rc geninfo_all_blocks=1 00:06:24.404 --rc geninfo_unexecuted_blocks=1 00:06:24.404 00:06:24.404 ' 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:24.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.404 --rc genhtml_branch_coverage=1 00:06:24.404 --rc genhtml_function_coverage=1 00:06:24.404 --rc genhtml_legend=1 00:06:24.404 --rc geninfo_all_blocks=1 00:06:24.404 --rc geninfo_unexecuted_blocks=1 00:06:24.404 00:06:24.404 ' 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:24.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.404 --rc genhtml_branch_coverage=1 00:06:24.404 --rc genhtml_function_coverage=1 00:06:24.404 --rc genhtml_legend=1 00:06:24.404 --rc geninfo_all_blocks=1 00:06:24.404 --rc geninfo_unexecuted_blocks=1 00:06:24.404 00:06:24.404 ' 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:24.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.404 --rc genhtml_branch_coverage=1 00:06:24.404 --rc genhtml_function_coverage=1 00:06:24.404 --rc genhtml_legend=1 00:06:24.404 --rc geninfo_all_blocks=1 00:06:24.404 --rc geninfo_unexecuted_blocks=1 00:06:24.404 00:06:24.404 ' 00:06:24.404 03:09:07 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:24.405 03:09:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:06:24.405 03:09:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:24.405 03:09:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:24.405 03:09:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:24.405 03:09:07 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.405 03:09:07 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.405 03:09:07 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.405 03:09:07 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:06:24.405 03:09:07 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.405 03:09:07 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:24.405 03:09:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.405 03:09:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.405 03:09:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:24.405 ************************************ 00:06:24.405 START TEST dd_malloc_copy 00:06:24.405 ************************************ 00:06:24.405 03:09:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1125 -- # malloc_copy 00:06:24.405 03:09:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:24.405 03:09:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:24.405 03:09:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:06:24.405 03:09:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:24.405 03:09:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:24.405 03:09:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:24.405 03:09:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:24.405 03:09:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:06:24.405 03:09:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:24.405 03:09:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:24.405 [2024-10-09 03:09:07.627377] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:06:24.405 [2024-10-09 03:09:07.627636] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60767 ] 00:06:24.405 { 00:06:24.405 "subsystems": [ 00:06:24.405 { 00:06:24.405 "subsystem": "bdev", 00:06:24.405 "config": [ 00:06:24.405 { 00:06:24.405 "params": { 00:06:24.405 "block_size": 512, 00:06:24.405 "num_blocks": 1048576, 00:06:24.405 "name": "malloc0" 00:06:24.405 }, 00:06:24.405 "method": "bdev_malloc_create" 00:06:24.405 }, 00:06:24.405 { 00:06:24.405 "params": { 00:06:24.405 "block_size": 512, 00:06:24.405 "num_blocks": 1048576, 00:06:24.405 "name": "malloc1" 00:06:24.405 }, 00:06:24.405 "method": "bdev_malloc_create" 00:06:24.405 }, 00:06:24.405 { 00:06:24.405 "method": "bdev_wait_for_examine" 00:06:24.405 } 00:06:24.405 ] 00:06:24.405 } 00:06:24.405 ] 00:06:24.405 } 00:06:24.664 [2024-10-09 03:09:07.767639] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.664 [2024-10-09 03:09:07.867066] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.664 [2024-10-09 03:09:07.921816] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:26.093  [2024-10-09T03:09:10.333Z] Copying: 223/512 [MB] (223 MBps) [2024-10-09T03:09:10.591Z] Copying: 450/512 [MB] (227 MBps) [2024-10-09T03:09:11.158Z] Copying: 512/512 [MB] (average 224 MBps) 00:06:27.855 00:06:27.855 03:09:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:27.855 03:09:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:06:27.855 03:09:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:27.855 03:09:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:28.114 [2024-10-09 03:09:11.177795] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:06:28.115 [2024-10-09 03:09:11.178084] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60814 ] 00:06:28.115 { 00:06:28.115 "subsystems": [ 00:06:28.115 { 00:06:28.115 "subsystem": "bdev", 00:06:28.115 "config": [ 00:06:28.115 { 00:06:28.115 "params": { 00:06:28.115 "block_size": 512, 00:06:28.115 "num_blocks": 1048576, 00:06:28.115 "name": "malloc0" 00:06:28.115 }, 00:06:28.115 "method": "bdev_malloc_create" 00:06:28.115 }, 00:06:28.115 { 00:06:28.115 "params": { 00:06:28.115 "block_size": 512, 00:06:28.115 "num_blocks": 1048576, 00:06:28.115 "name": "malloc1" 00:06:28.115 }, 00:06:28.115 "method": "bdev_malloc_create" 00:06:28.115 }, 00:06:28.115 { 00:06:28.115 "method": "bdev_wait_for_examine" 00:06:28.115 } 00:06:28.115 ] 00:06:28.115 } 00:06:28.115 ] 00:06:28.115 } 00:06:28.115 [2024-10-09 03:09:11.313985] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.115 [2024-10-09 03:09:11.391835] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.373 [2024-10-09 03:09:11.447962] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.752  [2024-10-09T03:09:13.991Z] Copying: 225/512 [MB] (225 MBps) [2024-10-09T03:09:14.249Z] Copying: 450/512 [MB] (225 MBps) [2024-10-09T03:09:14.817Z] Copying: 512/512 [MB] (average 225 MBps) 00:06:31.514 00:06:31.514 00:06:31.514 real 0m7.079s 00:06:31.514 user 0m6.089s 00:06:31.514 sys 0m0.836s 00:06:31.514 03:09:14 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.514 ************************************ 00:06:31.514 END TEST dd_malloc_copy 00:06:31.514 03:09:14 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:31.514 ************************************ 00:06:31.514 ************************************ 00:06:31.514 END TEST spdk_dd_malloc 00:06:31.514 ************************************ 00:06:31.514 00:06:31.514 real 0m7.319s 00:06:31.514 user 0m6.220s 00:06:31.514 sys 0m0.949s 00:06:31.514 03:09:14 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.514 03:09:14 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:31.514 03:09:14 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:31.515 03:09:14 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:31.515 03:09:14 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.515 03:09:14 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:31.515 ************************************ 00:06:31.515 START TEST spdk_dd_bdev_to_bdev 00:06:31.515 ************************************ 00:06:31.515 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:31.515 * Looking for test storage... 
00:06:31.774 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # lcov --version 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:31.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.774 --rc genhtml_branch_coverage=1 00:06:31.774 --rc genhtml_function_coverage=1 00:06:31.774 --rc genhtml_legend=1 00:06:31.774 --rc geninfo_all_blocks=1 00:06:31.774 --rc geninfo_unexecuted_blocks=1 00:06:31.774 00:06:31.774 ' 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:31.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.774 --rc genhtml_branch_coverage=1 00:06:31.774 --rc genhtml_function_coverage=1 00:06:31.774 --rc genhtml_legend=1 00:06:31.774 --rc geninfo_all_blocks=1 00:06:31.774 --rc geninfo_unexecuted_blocks=1 00:06:31.774 00:06:31.774 ' 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:31.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.774 --rc genhtml_branch_coverage=1 00:06:31.774 --rc genhtml_function_coverage=1 00:06:31.774 --rc genhtml_legend=1 00:06:31.774 --rc geninfo_all_blocks=1 00:06:31.774 --rc geninfo_unexecuted_blocks=1 00:06:31.774 00:06:31.774 ' 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:31.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.774 --rc genhtml_branch_coverage=1 00:06:31.774 --rc genhtml_function_coverage=1 00:06:31.774 --rc genhtml_legend=1 00:06:31.774 --rc geninfo_all_blocks=1 00:06:31.774 --rc geninfo_unexecuted_blocks=1 00:06:31.774 00:06:31.774 ' 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.774 03:09:14 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.774 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.775 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.775 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:06:31.775 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.775 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:06:31.775 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:06:31.775 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:06:31.775 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:06:31.775 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:06:31.775 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:06:31.775 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:06:31.775 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:06:31.775 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:06:31.775 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:06:31.775 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:31.775 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:31.775 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:06:31.775 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:06:31.775 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:31.775 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:31.775 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:06:31.775 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:06:31.775 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:31.775 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:31.775 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.775 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:31.775 ************************************ 00:06:31.775 START TEST dd_inflate_file 00:06:31.775 ************************************ 00:06:31.775 03:09:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:31.775 [2024-10-09 03:09:14.999164] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:06:31.775 [2024-10-09 03:09:14.999400] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60927 ] 00:06:32.034 [2024-10-09 03:09:15.131471] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.034 [2024-10-09 03:09:15.209861] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.034 [2024-10-09 03:09:15.262774] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.292  [2024-10-09T03:09:15.595Z] Copying: 64/64 [MB] (average 1422 MBps) 00:06:32.292 00:06:32.292 ************************************ 00:06:32.293 END TEST dd_inflate_file 00:06:32.293 ************************************ 00:06:32.293 00:06:32.293 real 0m0.594s 00:06:32.293 user 0m0.336s 00:06:32.293 sys 0m0.325s 00:06:32.293 03:09:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.293 03:09:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:06:32.293 03:09:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:06:32.551 03:09:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:06:32.551 03:09:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:32.551 03:09:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:06:32.551 03:09:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:32.551 03:09:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:32.551 03:09:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:32.551 03:09:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.551 03:09:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:32.551 ************************************ 00:06:32.551 START TEST dd_copy_to_out_bdev 00:06:32.551 ************************************ 00:06:32.551 03:09:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:32.551 { 00:06:32.551 "subsystems": [ 00:06:32.551 { 00:06:32.551 "subsystem": "bdev", 00:06:32.551 "config": [ 00:06:32.551 { 00:06:32.551 "params": { 00:06:32.551 "trtype": "pcie", 00:06:32.551 "traddr": "0000:00:10.0", 00:06:32.551 "name": "Nvme0" 00:06:32.551 }, 00:06:32.551 "method": "bdev_nvme_attach_controller" 00:06:32.551 }, 00:06:32.551 { 00:06:32.551 "params": { 00:06:32.551 "trtype": "pcie", 00:06:32.551 "traddr": "0000:00:11.0", 00:06:32.551 "name": "Nvme1" 00:06:32.551 }, 00:06:32.551 "method": "bdev_nvme_attach_controller" 00:06:32.551 }, 00:06:32.551 { 00:06:32.551 "method": "bdev_wait_for_examine" 00:06:32.551 } 00:06:32.551 ] 00:06:32.551 } 00:06:32.551 ] 00:06:32.551 } 00:06:32.551 [2024-10-09 03:09:15.659594] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:06:32.551 [2024-10-09 03:09:15.659855] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60966 ] 00:06:32.551 [2024-10-09 03:09:15.799259] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.809 [2024-10-09 03:09:15.887700] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.809 [2024-10-09 03:09:15.941018] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:34.186  [2024-10-09T03:09:17.489Z] Copying: 48/64 [MB] (48 MBps) [2024-10-09T03:09:17.747Z] Copying: 64/64 [MB] (average 48 MBps) 00:06:34.444 00:06:34.444 ************************************ 00:06:34.444 END TEST dd_copy_to_out_bdev 00:06:34.444 ************************************ 00:06:34.444 00:06:34.444 real 0m2.090s 00:06:34.444 user 0m1.851s 00:06:34.444 sys 0m1.684s 00:06:34.444 03:09:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.444 03:09:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:34.444 03:09:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:06:34.444 03:09:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:06:34.444 03:09:17 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.445 03:09:17 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.445 03:09:17 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:34.445 ************************************ 00:06:34.445 START TEST dd_offset_magic 00:06:34.445 ************************************ 00:06:34.703 03:09:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1125 -- # offset_magic 00:06:34.703 03:09:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:06:34.703 03:09:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:06:34.703 03:09:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:06:34.703 03:09:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:34.703 03:09:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:06:34.703 03:09:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:34.703 03:09:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:34.703 03:09:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:34.703 [2024-10-09 03:09:17.799905] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:06:34.703 [2024-10-09 03:09:17.800004] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61011 ] 00:06:34.703 { 00:06:34.703 "subsystems": [ 00:06:34.703 { 00:06:34.703 "subsystem": "bdev", 00:06:34.703 "config": [ 00:06:34.703 { 00:06:34.703 "params": { 00:06:34.703 "trtype": "pcie", 00:06:34.703 "traddr": "0000:00:10.0", 00:06:34.703 "name": "Nvme0" 00:06:34.703 }, 00:06:34.703 "method": "bdev_nvme_attach_controller" 00:06:34.703 }, 00:06:34.703 { 00:06:34.703 "params": { 00:06:34.703 "trtype": "pcie", 00:06:34.703 "traddr": "0000:00:11.0", 00:06:34.703 "name": "Nvme1" 00:06:34.703 }, 00:06:34.703 "method": "bdev_nvme_attach_controller" 00:06:34.703 }, 00:06:34.703 { 00:06:34.703 "method": "bdev_wait_for_examine" 00:06:34.703 } 00:06:34.703 ] 00:06:34.703 } 00:06:34.703 ] 00:06:34.703 } 00:06:34.703 [2024-10-09 03:09:17.939117] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.962 [2024-10-09 03:09:18.024103] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.962 [2024-10-09 03:09:18.077182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:35.221  [2024-10-09T03:09:18.783Z] Copying: 65/65 [MB] (average 812 MBps) 00:06:35.480 00:06:35.480 03:09:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:35.480 03:09:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:06:35.480 03:09:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:35.480 03:09:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:35.480 [2024-10-09 03:09:18.641359] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:06:35.480 [2024-10-09 03:09:18.641919] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61031 ] 00:06:35.480 { 00:06:35.480 "subsystems": [ 00:06:35.480 { 00:06:35.480 "subsystem": "bdev", 00:06:35.480 "config": [ 00:06:35.480 { 00:06:35.480 "params": { 00:06:35.480 "trtype": "pcie", 00:06:35.480 "traddr": "0000:00:10.0", 00:06:35.480 "name": "Nvme0" 00:06:35.480 }, 00:06:35.480 "method": "bdev_nvme_attach_controller" 00:06:35.480 }, 00:06:35.480 { 00:06:35.480 "params": { 00:06:35.480 "trtype": "pcie", 00:06:35.480 "traddr": "0000:00:11.0", 00:06:35.480 "name": "Nvme1" 00:06:35.480 }, 00:06:35.480 "method": "bdev_nvme_attach_controller" 00:06:35.480 }, 00:06:35.480 { 00:06:35.480 "method": "bdev_wait_for_examine" 00:06:35.480 } 00:06:35.480 ] 00:06:35.480 } 00:06:35.480 ] 00:06:35.480 } 00:06:35.480 [2024-10-09 03:09:18.779283] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.739 [2024-10-09 03:09:18.874774] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.739 [2024-10-09 03:09:18.930330] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:35.997  [2024-10-09T03:09:19.559Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:36.256 00:06:36.256 03:09:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:36.256 03:09:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:36.256 03:09:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:36.256 03:09:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:36.256 03:09:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:06:36.256 03:09:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:36.256 03:09:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:36.256 [2024-10-09 03:09:19.386560] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:06:36.256 [2024-10-09 03:09:19.386832] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61044 ] 00:06:36.256 { 00:06:36.256 "subsystems": [ 00:06:36.256 { 00:06:36.256 "subsystem": "bdev", 00:06:36.256 "config": [ 00:06:36.256 { 00:06:36.256 "params": { 00:06:36.256 "trtype": "pcie", 00:06:36.256 "traddr": "0000:00:10.0", 00:06:36.256 "name": "Nvme0" 00:06:36.256 }, 00:06:36.256 "method": "bdev_nvme_attach_controller" 00:06:36.256 }, 00:06:36.256 { 00:06:36.256 "params": { 00:06:36.256 "trtype": "pcie", 00:06:36.256 "traddr": "0000:00:11.0", 00:06:36.257 "name": "Nvme1" 00:06:36.257 }, 00:06:36.257 "method": "bdev_nvme_attach_controller" 00:06:36.257 }, 00:06:36.257 { 00:06:36.257 "method": "bdev_wait_for_examine" 00:06:36.257 } 00:06:36.257 ] 00:06:36.257 } 00:06:36.257 ] 00:06:36.257 } 00:06:36.257 [2024-10-09 03:09:19.520636] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.523 [2024-10-09 03:09:19.612253] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.523 [2024-10-09 03:09:19.665470] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.796  [2024-10-09T03:09:20.358Z] Copying: 65/65 [MB] (average 928 MBps) 00:06:37.055 00:06:37.055 03:09:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:06:37.055 03:09:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:37.055 03:09:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:37.055 03:09:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:37.055 [2024-10-09 03:09:20.255479] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:06:37.055 [2024-10-09 03:09:20.255577] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61064 ] 00:06:37.055 { 00:06:37.055 "subsystems": [ 00:06:37.055 { 00:06:37.055 "subsystem": "bdev", 00:06:37.055 "config": [ 00:06:37.055 { 00:06:37.055 "params": { 00:06:37.055 "trtype": "pcie", 00:06:37.055 "traddr": "0000:00:10.0", 00:06:37.055 "name": "Nvme0" 00:06:37.055 }, 00:06:37.055 "method": "bdev_nvme_attach_controller" 00:06:37.055 }, 00:06:37.055 { 00:06:37.055 "params": { 00:06:37.055 "trtype": "pcie", 00:06:37.055 "traddr": "0000:00:11.0", 00:06:37.055 "name": "Nvme1" 00:06:37.055 }, 00:06:37.055 "method": "bdev_nvme_attach_controller" 00:06:37.055 }, 00:06:37.055 { 00:06:37.055 "method": "bdev_wait_for_examine" 00:06:37.055 } 00:06:37.055 ] 00:06:37.055 } 00:06:37.055 ] 00:06:37.055 } 00:06:37.314 [2024-10-09 03:09:20.394458] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.314 [2024-10-09 03:09:20.494817] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.314 [2024-10-09 03:09:20.552454] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.573  [2024-10-09T03:09:21.134Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:37.831 00:06:37.831 03:09:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:37.831 03:09:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:37.831 00:06:37.831 real 0m3.214s 00:06:37.831 user 0m2.350s 00:06:37.831 sys 0m0.950s 00:06:37.831 03:09:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.831 ************************************ 00:06:37.831 END TEST dd_offset_magic 00:06:37.831 ************************************ 00:06:37.831 03:09:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:37.831 03:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:06:37.832 03:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:06:37.832 03:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:37.832 03:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:37.832 03:09:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:37.832 03:09:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:37.832 03:09:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:37.832 03:09:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:06:37.832 03:09:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:37.832 03:09:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:37.832 03:09:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:37.832 [2024-10-09 03:09:21.056370] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:06:37.832 [2024-10-09 03:09:21.056656] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61101 ] 00:06:37.832 { 00:06:37.832 "subsystems": [ 00:06:37.832 { 00:06:37.832 "subsystem": "bdev", 00:06:37.832 "config": [ 00:06:37.832 { 00:06:37.832 "params": { 00:06:37.832 "trtype": "pcie", 00:06:37.832 "traddr": "0000:00:10.0", 00:06:37.832 "name": "Nvme0" 00:06:37.832 }, 00:06:37.832 "method": "bdev_nvme_attach_controller" 00:06:37.832 }, 00:06:37.832 { 00:06:37.832 "params": { 00:06:37.832 "trtype": "pcie", 00:06:37.832 "traddr": "0000:00:11.0", 00:06:37.832 "name": "Nvme1" 00:06:37.832 }, 00:06:37.832 "method": "bdev_nvme_attach_controller" 00:06:37.832 }, 00:06:37.832 { 00:06:37.832 "method": "bdev_wait_for_examine" 00:06:37.832 } 00:06:37.832 ] 00:06:37.832 } 00:06:37.832 ] 00:06:37.832 } 00:06:38.090 [2024-10-09 03:09:21.195277] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.090 [2024-10-09 03:09:21.284732] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.090 [2024-10-09 03:09:21.336749] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:38.349  [2024-10-09T03:09:21.911Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:06:38.608 00:06:38.608 03:09:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:06:38.608 03:09:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:06:38.608 03:09:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:38.608 03:09:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:38.608 03:09:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:38.608 03:09:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:38.608 03:09:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:06:38.608 03:09:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:38.608 03:09:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:38.608 03:09:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:38.608 [2024-10-09 03:09:21.783604] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:06:38.608 [2024-10-09 03:09:21.783705] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61122 ] 00:06:38.608 { 00:06:38.608 "subsystems": [ 00:06:38.608 { 00:06:38.608 "subsystem": "bdev", 00:06:38.608 "config": [ 00:06:38.608 { 00:06:38.608 "params": { 00:06:38.608 "trtype": "pcie", 00:06:38.608 "traddr": "0000:00:10.0", 00:06:38.608 "name": "Nvme0" 00:06:38.608 }, 00:06:38.608 "method": "bdev_nvme_attach_controller" 00:06:38.608 }, 00:06:38.608 { 00:06:38.608 "params": { 00:06:38.608 "trtype": "pcie", 00:06:38.608 "traddr": "0000:00:11.0", 00:06:38.608 "name": "Nvme1" 00:06:38.608 }, 00:06:38.608 "method": "bdev_nvme_attach_controller" 00:06:38.608 }, 00:06:38.608 { 00:06:38.608 "method": "bdev_wait_for_examine" 00:06:38.608 } 00:06:38.608 ] 00:06:38.608 } 00:06:38.608 ] 00:06:38.608 } 00:06:38.868 [2024-10-09 03:09:21.921918] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.868 [2024-10-09 03:09:22.025278] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.868 [2024-10-09 03:09:22.080046] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.127  [2024-10-09T03:09:22.689Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:06:39.386 00:06:39.386 03:09:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:06:39.386 00:06:39.386 real 0m7.786s 00:06:39.386 user 0m5.784s 00:06:39.386 sys 0m3.694s 00:06:39.386 03:09:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.386 ************************************ 00:06:39.386 END TEST spdk_dd_bdev_to_bdev 00:06:39.386 ************************************ 00:06:39.386 03:09:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:39.386 03:09:22 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:06:39.386 03:09:22 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:39.386 03:09:22 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.386 03:09:22 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.386 03:09:22 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:39.386 ************************************ 00:06:39.386 START TEST spdk_dd_uring 00:06:39.386 ************************************ 00:06:39.386 03:09:22 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:39.386 * Looking for test storage... 
00:06:39.386 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:39.386 03:09:22 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:39.386 03:09:22 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:39.386 03:09:22 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # lcov --version 00:06:39.645 03:09:22 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:39.645 03:09:22 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.645 03:09:22 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.645 03:09:22 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.645 03:09:22 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.645 03:09:22 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.645 03:09:22 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.645 03:09:22 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.645 03:09:22 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.645 03:09:22 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.645 03:09:22 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.645 03:09:22 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.645 03:09:22 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:06:39.645 03:09:22 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:06:39.645 03:09:22 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.645 03:09:22 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:39.645 03:09:22 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:06:39.645 03:09:22 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:06:39.645 03:09:22 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.645 03:09:22 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:06:39.645 03:09:22 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.645 03:09:22 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:06:39.645 03:09:22 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:06:39.645 03:09:22 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.645 03:09:22 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:06:39.645 03:09:22 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.645 03:09:22 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.645 03:09:22 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.645 03:09:22 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:06:39.645 03:09:22 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.645 03:09:22 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:39.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.645 --rc genhtml_branch_coverage=1 00:06:39.645 --rc genhtml_function_coverage=1 00:06:39.645 --rc genhtml_legend=1 00:06:39.645 --rc geninfo_all_blocks=1 00:06:39.645 --rc geninfo_unexecuted_blocks=1 00:06:39.645 00:06:39.645 ' 00:06:39.645 03:09:22 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:39.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.645 --rc genhtml_branch_coverage=1 00:06:39.645 --rc genhtml_function_coverage=1 00:06:39.645 --rc genhtml_legend=1 00:06:39.645 --rc geninfo_all_blocks=1 00:06:39.645 --rc geninfo_unexecuted_blocks=1 00:06:39.645 00:06:39.645 ' 00:06:39.645 03:09:22 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:39.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.645 --rc genhtml_branch_coverage=1 00:06:39.645 --rc genhtml_function_coverage=1 00:06:39.645 --rc genhtml_legend=1 00:06:39.646 --rc geninfo_all_blocks=1 00:06:39.646 --rc geninfo_unexecuted_blocks=1 00:06:39.646 00:06:39.646 ' 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:39.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.646 --rc genhtml_branch_coverage=1 00:06:39.646 --rc genhtml_function_coverage=1 00:06:39.646 --rc genhtml_legend=1 00:06:39.646 --rc geninfo_all_blocks=1 00:06:39.646 --rc geninfo_unexecuted_blocks=1 00:06:39.646 00:06:39.646 ' 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:39.646 ************************************ 00:06:39.646 START TEST dd_uring_copy 00:06:39.646 ************************************ 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1125 -- # uring_zram_copy 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:39.646 
03:09:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=k6gozdwz1qdr27si3hfuoa7x3xv8snn4k8s17wm82lfhmiqn71tcvtzyoc6gl37vwm1turzaxwv8dworubf19flt92ftcbaalkivs9zgkjv53yqr3hd0fo6d6p6mxpm948tw7aolpsiz19ze653x5bjsc3bkwshctw9img23l3xmumef4kyj45odaes18nwn3xm9fyudedzrh214zxe06lf0cgc2r7faeurxv4s2hs92zjj7tej68bn07qokrhssi62ujld8uyylponnlo9g4zfbo3z95n9l8eenenkrxobzukneyfo7t9y07vefmazp0mj6tzezzdrqca55pnbuc0sr5joyg5xw3zkmyski856cx5k4owp9yccpk3ye4e3i9veycxphnom9gbyqe53mj58gruvl9n75e8pieg0bykawsvnphp0lgr5zr8n45ly5kaxa6960yl248ocl01lyd7yw57mj4ruujw4b1iymjuuqm6wejlmbhhtw9tkletfq8e0xuufehhs4qmr8pnmanugko9sqtseizbbx5qvtd4ii9m6iji3g6qhh3rf5ezfck5u3bnenbqmcxize14ean4030aud8vdkr17stmeqv1fe7w9ihm79x9g5ppwdmtq3mg29oj5rgnm87ddw8xqrzlmzwckn2m2bbirk3zvy6mh3d7fju4j36c3wjqbruxtweone2l95h27s4xd7yy6kisyy1f7n7c5zr73cjrepbpxxramq0t675c6f2gujpycq0jz78upflw68js8qbw6zztgwtx32q5y7bid2agxpatdlfqusr2718ivo5q35y4kakq1ztlywpcvdbudtpb2j10jct5nb21z7ttq3qnvjq4yq2qt39xxnkv0kf1ur6llmu42yfxgymj14cgx2qprem5fha0mfsok2m7ay6jnnpdmc5lr7zw743p81c3814vtxget6gjms1n2eo8zn8osepmkkvpi7exu6xixp4646asq0v9d7agdw1rzvyvhadagx 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
k6gozdwz1qdr27si3hfuoa7x3xv8snn4k8s17wm82lfhmiqn71tcvtzyoc6gl37vwm1turzaxwv8dworubf19flt92ftcbaalkivs9zgkjv53yqr3hd0fo6d6p6mxpm948tw7aolpsiz19ze653x5bjsc3bkwshctw9img23l3xmumef4kyj45odaes18nwn3xm9fyudedzrh214zxe06lf0cgc2r7faeurxv4s2hs92zjj7tej68bn07qokrhssi62ujld8uyylponnlo9g4zfbo3z95n9l8eenenkrxobzukneyfo7t9y07vefmazp0mj6tzezzdrqca55pnbuc0sr5joyg5xw3zkmyski856cx5k4owp9yccpk3ye4e3i9veycxphnom9gbyqe53mj58gruvl9n75e8pieg0bykawsvnphp0lgr5zr8n45ly5kaxa6960yl248ocl01lyd7yw57mj4ruujw4b1iymjuuqm6wejlmbhhtw9tkletfq8e0xuufehhs4qmr8pnmanugko9sqtseizbbx5qvtd4ii9m6iji3g6qhh3rf5ezfck5u3bnenbqmcxize14ean4030aud8vdkr17stmeqv1fe7w9ihm79x9g5ppwdmtq3mg29oj5rgnm87ddw8xqrzlmzwckn2m2bbirk3zvy6mh3d7fju4j36c3wjqbruxtweone2l95h27s4xd7yy6kisyy1f7n7c5zr73cjrepbpxxramq0t675c6f2gujpycq0jz78upflw68js8qbw6zztgwtx32q5y7bid2agxpatdlfqusr2718ivo5q35y4kakq1ztlywpcvdbudtpb2j10jct5nb21z7ttq3qnvjq4yq2qt39xxnkv0kf1ur6llmu42yfxgymj14cgx2qprem5fha0mfsok2m7ay6jnnpdmc5lr7zw743p81c3814vtxget6gjms1n2eo8zn8osepmkkvpi7exu6xixp4646asq0v9d7agdw1rzvyvhadagx 00:06:39.646 03:09:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:06:39.646 [2024-10-09 03:09:22.877399] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:06:39.646 [2024-10-09 03:09:22.877492] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61200 ] 00:06:39.905 [2024-10-09 03:09:23.010146] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.905 [2024-10-09 03:09:23.126652] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.905 [2024-10-09 03:09:23.184320] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:40.842  [2024-10-09T03:09:24.403Z] Copying: 511/511 [MB] (average 1017 MBps) 00:06:41.100 00:06:41.100 03:09:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:06:41.100 03:09:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:06:41.100 03:09:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:41.100 03:09:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:41.359 [2024-10-09 03:09:24.402759] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:06:41.359 [2024-10-09 03:09:24.402855] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61216 ] 00:06:41.359 { 00:06:41.359 "subsystems": [ 00:06:41.359 { 00:06:41.359 "subsystem": "bdev", 00:06:41.359 "config": [ 00:06:41.359 { 00:06:41.359 "params": { 00:06:41.359 "block_size": 512, 00:06:41.359 "num_blocks": 1048576, 00:06:41.359 "name": "malloc0" 00:06:41.359 }, 00:06:41.359 "method": "bdev_malloc_create" 00:06:41.359 }, 00:06:41.359 { 00:06:41.359 "params": { 00:06:41.359 "filename": "/dev/zram1", 00:06:41.359 "name": "uring0" 00:06:41.359 }, 00:06:41.359 "method": "bdev_uring_create" 00:06:41.359 }, 00:06:41.359 { 00:06:41.359 "method": "bdev_wait_for_examine" 00:06:41.359 } 00:06:41.359 ] 00:06:41.359 } 00:06:41.359 ] 00:06:41.359 } 00:06:41.359 [2024-10-09 03:09:24.537965] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.359 [2024-10-09 03:09:24.623268] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.618 [2024-10-09 03:09:24.678192] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:43.002  [2024-10-09T03:09:27.242Z] Copying: 228/512 [MB] (228 MBps) [2024-10-09T03:09:27.242Z] Copying: 455/512 [MB] (226 MBps) [2024-10-09T03:09:27.810Z] Copying: 512/512 [MB] (average 223 MBps) 00:06:44.507 00:06:44.507 03:09:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:06:44.507 03:09:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:06:44.507 03:09:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:44.507 03:09:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:44.507 [2024-10-09 03:09:27.639426] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:06:44.507 [2024-10-09 03:09:27.639526] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61260 ] 00:06:44.507 { 00:06:44.507 "subsystems": [ 00:06:44.507 { 00:06:44.507 "subsystem": "bdev", 00:06:44.507 "config": [ 00:06:44.507 { 00:06:44.507 "params": { 00:06:44.507 "block_size": 512, 00:06:44.507 "num_blocks": 1048576, 00:06:44.507 "name": "malloc0" 00:06:44.507 }, 00:06:44.507 "method": "bdev_malloc_create" 00:06:44.507 }, 00:06:44.507 { 00:06:44.507 "params": { 00:06:44.507 "filename": "/dev/zram1", 00:06:44.507 "name": "uring0" 00:06:44.507 }, 00:06:44.507 "method": "bdev_uring_create" 00:06:44.507 }, 00:06:44.507 { 00:06:44.507 "method": "bdev_wait_for_examine" 00:06:44.507 } 00:06:44.507 ] 00:06:44.507 } 00:06:44.507 ] 00:06:44.507 } 00:06:44.507 [2024-10-09 03:09:27.778131] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.766 [2024-10-09 03:09:27.861225] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.767 [2024-10-09 03:09:27.917071] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:46.148  [2024-10-09T03:09:30.388Z] Copying: 187/512 [MB] (187 MBps) [2024-10-09T03:09:31.324Z] Copying: 350/512 [MB] (163 MBps) [2024-10-09T03:09:31.583Z] Copying: 512/512 [MB] (average 177 MBps) 00:06:48.280 00:06:48.280 03:09:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:06:48.280 03:09:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ k6gozdwz1qdr27si3hfuoa7x3xv8snn4k8s17wm82lfhmiqn71tcvtzyoc6gl37vwm1turzaxwv8dworubf19flt92ftcbaalkivs9zgkjv53yqr3hd0fo6d6p6mxpm948tw7aolpsiz19ze653x5bjsc3bkwshctw9img23l3xmumef4kyj45odaes18nwn3xm9fyudedzrh214zxe06lf0cgc2r7faeurxv4s2hs92zjj7tej68bn07qokrhssi62ujld8uyylponnlo9g4zfbo3z95n9l8eenenkrxobzukneyfo7t9y07vefmazp0mj6tzezzdrqca55pnbuc0sr5joyg5xw3zkmyski856cx5k4owp9yccpk3ye4e3i9veycxphnom9gbyqe53mj58gruvl9n75e8pieg0bykawsvnphp0lgr5zr8n45ly5kaxa6960yl248ocl01lyd7yw57mj4ruujw4b1iymjuuqm6wejlmbhhtw9tkletfq8e0xuufehhs4qmr8pnmanugko9sqtseizbbx5qvtd4ii9m6iji3g6qhh3rf5ezfck5u3bnenbqmcxize14ean4030aud8vdkr17stmeqv1fe7w9ihm79x9g5ppwdmtq3mg29oj5rgnm87ddw8xqrzlmzwckn2m2bbirk3zvy6mh3d7fju4j36c3wjqbruxtweone2l95h27s4xd7yy6kisyy1f7n7c5zr73cjrepbpxxramq0t675c6f2gujpycq0jz78upflw68js8qbw6zztgwtx32q5y7bid2agxpatdlfqusr2718ivo5q35y4kakq1ztlywpcvdbudtpb2j10jct5nb21z7ttq3qnvjq4yq2qt39xxnkv0kf1ur6llmu42yfxgymj14cgx2qprem5fha0mfsok2m7ay6jnnpdmc5lr7zw743p81c3814vtxget6gjms1n2eo8zn8osepmkkvpi7exu6xixp4646asq0v9d7agdw1rzvyvhadagx == 
\k\6\g\o\z\d\w\z\1\q\d\r\2\7\s\i\3\h\f\u\o\a\7\x\3\x\v\8\s\n\n\4\k\8\s\1\7\w\m\8\2\l\f\h\m\i\q\n\7\1\t\c\v\t\z\y\o\c\6\g\l\3\7\v\w\m\1\t\u\r\z\a\x\w\v\8\d\w\o\r\u\b\f\1\9\f\l\t\9\2\f\t\c\b\a\a\l\k\i\v\s\9\z\g\k\j\v\5\3\y\q\r\3\h\d\0\f\o\6\d\6\p\6\m\x\p\m\9\4\8\t\w\7\a\o\l\p\s\i\z\1\9\z\e\6\5\3\x\5\b\j\s\c\3\b\k\w\s\h\c\t\w\9\i\m\g\2\3\l\3\x\m\u\m\e\f\4\k\y\j\4\5\o\d\a\e\s\1\8\n\w\n\3\x\m\9\f\y\u\d\e\d\z\r\h\2\1\4\z\x\e\0\6\l\f\0\c\g\c\2\r\7\f\a\e\u\r\x\v\4\s\2\h\s\9\2\z\j\j\7\t\e\j\6\8\b\n\0\7\q\o\k\r\h\s\s\i\6\2\u\j\l\d\8\u\y\y\l\p\o\n\n\l\o\9\g\4\z\f\b\o\3\z\9\5\n\9\l\8\e\e\n\e\n\k\r\x\o\b\z\u\k\n\e\y\f\o\7\t\9\y\0\7\v\e\f\m\a\z\p\0\m\j\6\t\z\e\z\z\d\r\q\c\a\5\5\p\n\b\u\c\0\s\r\5\j\o\y\g\5\x\w\3\z\k\m\y\s\k\i\8\5\6\c\x\5\k\4\o\w\p\9\y\c\c\p\k\3\y\e\4\e\3\i\9\v\e\y\c\x\p\h\n\o\m\9\g\b\y\q\e\5\3\m\j\5\8\g\r\u\v\l\9\n\7\5\e\8\p\i\e\g\0\b\y\k\a\w\s\v\n\p\h\p\0\l\g\r\5\z\r\8\n\4\5\l\y\5\k\a\x\a\6\9\6\0\y\l\2\4\8\o\c\l\0\1\l\y\d\7\y\w\5\7\m\j\4\r\u\u\j\w\4\b\1\i\y\m\j\u\u\q\m\6\w\e\j\l\m\b\h\h\t\w\9\t\k\l\e\t\f\q\8\e\0\x\u\u\f\e\h\h\s\4\q\m\r\8\p\n\m\a\n\u\g\k\o\9\s\q\t\s\e\i\z\b\b\x\5\q\v\t\d\4\i\i\9\m\6\i\j\i\3\g\6\q\h\h\3\r\f\5\e\z\f\c\k\5\u\3\b\n\e\n\b\q\m\c\x\i\z\e\1\4\e\a\n\4\0\3\0\a\u\d\8\v\d\k\r\1\7\s\t\m\e\q\v\1\f\e\7\w\9\i\h\m\7\9\x\9\g\5\p\p\w\d\m\t\q\3\m\g\2\9\o\j\5\r\g\n\m\8\7\d\d\w\8\x\q\r\z\l\m\z\w\c\k\n\2\m\2\b\b\i\r\k\3\z\v\y\6\m\h\3\d\7\f\j\u\4\j\3\6\c\3\w\j\q\b\r\u\x\t\w\e\o\n\e\2\l\9\5\h\2\7\s\4\x\d\7\y\y\6\k\i\s\y\y\1\f\7\n\7\c\5\z\r\7\3\c\j\r\e\p\b\p\x\x\r\a\m\q\0\t\6\7\5\c\6\f\2\g\u\j\p\y\c\q\0\j\z\7\8\u\p\f\l\w\6\8\j\s\8\q\b\w\6\z\z\t\g\w\t\x\3\2\q\5\y\7\b\i\d\2\a\g\x\p\a\t\d\l\f\q\u\s\r\2\7\1\8\i\v\o\5\q\3\5\y\4\k\a\k\q\1\z\t\l\y\w\p\c\v\d\b\u\d\t\p\b\2\j\1\0\j\c\t\5\n\b\2\1\z\7\t\t\q\3\q\n\v\j\q\4\y\q\2\q\t\3\9\x\x\n\k\v\0\k\f\1\u\r\6\l\l\m\u\4\2\y\f\x\g\y\m\j\1\4\c\g\x\2\q\p\r\e\m\5\f\h\a\0\m\f\s\o\k\2\m\7\a\y\6\j\n\n\p\d\m\c\5\l\r\7\z\w\7\4\3\p\8\1\c\3\8\1\4\v\t\x\g\e\t\6\g\j\m\s\1\n\2\e\o\8\z\n\8\o\s\e\p\m\k\k\v\p\i\7\e\x\u\6\x\i\x\p\4\6\4\6\a\s\q\0\v\9\d\7\a\g\d\w\1\r\z\v\y\v\h\a\d\a\g\x ]] 00:06:48.280 03:09:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:06:48.280 03:09:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ k6gozdwz1qdr27si3hfuoa7x3xv8snn4k8s17wm82lfhmiqn71tcvtzyoc6gl37vwm1turzaxwv8dworubf19flt92ftcbaalkivs9zgkjv53yqr3hd0fo6d6p6mxpm948tw7aolpsiz19ze653x5bjsc3bkwshctw9img23l3xmumef4kyj45odaes18nwn3xm9fyudedzrh214zxe06lf0cgc2r7faeurxv4s2hs92zjj7tej68bn07qokrhssi62ujld8uyylponnlo9g4zfbo3z95n9l8eenenkrxobzukneyfo7t9y07vefmazp0mj6tzezzdrqca55pnbuc0sr5joyg5xw3zkmyski856cx5k4owp9yccpk3ye4e3i9veycxphnom9gbyqe53mj58gruvl9n75e8pieg0bykawsvnphp0lgr5zr8n45ly5kaxa6960yl248ocl01lyd7yw57mj4ruujw4b1iymjuuqm6wejlmbhhtw9tkletfq8e0xuufehhs4qmr8pnmanugko9sqtseizbbx5qvtd4ii9m6iji3g6qhh3rf5ezfck5u3bnenbqmcxize14ean4030aud8vdkr17stmeqv1fe7w9ihm79x9g5ppwdmtq3mg29oj5rgnm87ddw8xqrzlmzwckn2m2bbirk3zvy6mh3d7fju4j36c3wjqbruxtweone2l95h27s4xd7yy6kisyy1f7n7c5zr73cjrepbpxxramq0t675c6f2gujpycq0jz78upflw68js8qbw6zztgwtx32q5y7bid2agxpatdlfqusr2718ivo5q35y4kakq1ztlywpcvdbudtpb2j10jct5nb21z7ttq3qnvjq4yq2qt39xxnkv0kf1ur6llmu42yfxgymj14cgx2qprem5fha0mfsok2m7ay6jnnpdmc5lr7zw743p81c3814vtxget6gjms1n2eo8zn8osepmkkvpi7exu6xixp4646asq0v9d7agdw1rzvyvhadagx == 
\k\6\g\o\z\d\w\z\1\q\d\r\2\7\s\i\3\h\f\u\o\a\7\x\3\x\v\8\s\n\n\4\k\8\s\1\7\w\m\8\2\l\f\h\m\i\q\n\7\1\t\c\v\t\z\y\o\c\6\g\l\3\7\v\w\m\1\t\u\r\z\a\x\w\v\8\d\w\o\r\u\b\f\1\9\f\l\t\9\2\f\t\c\b\a\a\l\k\i\v\s\9\z\g\k\j\v\5\3\y\q\r\3\h\d\0\f\o\6\d\6\p\6\m\x\p\m\9\4\8\t\w\7\a\o\l\p\s\i\z\1\9\z\e\6\5\3\x\5\b\j\s\c\3\b\k\w\s\h\c\t\w\9\i\m\g\2\3\l\3\x\m\u\m\e\f\4\k\y\j\4\5\o\d\a\e\s\1\8\n\w\n\3\x\m\9\f\y\u\d\e\d\z\r\h\2\1\4\z\x\e\0\6\l\f\0\c\g\c\2\r\7\f\a\e\u\r\x\v\4\s\2\h\s\9\2\z\j\j\7\t\e\j\6\8\b\n\0\7\q\o\k\r\h\s\s\i\6\2\u\j\l\d\8\u\y\y\l\p\o\n\n\l\o\9\g\4\z\f\b\o\3\z\9\5\n\9\l\8\e\e\n\e\n\k\r\x\o\b\z\u\k\n\e\y\f\o\7\t\9\y\0\7\v\e\f\m\a\z\p\0\m\j\6\t\z\e\z\z\d\r\q\c\a\5\5\p\n\b\u\c\0\s\r\5\j\o\y\g\5\x\w\3\z\k\m\y\s\k\i\8\5\6\c\x\5\k\4\o\w\p\9\y\c\c\p\k\3\y\e\4\e\3\i\9\v\e\y\c\x\p\h\n\o\m\9\g\b\y\q\e\5\3\m\j\5\8\g\r\u\v\l\9\n\7\5\e\8\p\i\e\g\0\b\y\k\a\w\s\v\n\p\h\p\0\l\g\r\5\z\r\8\n\4\5\l\y\5\k\a\x\a\6\9\6\0\y\l\2\4\8\o\c\l\0\1\l\y\d\7\y\w\5\7\m\j\4\r\u\u\j\w\4\b\1\i\y\m\j\u\u\q\m\6\w\e\j\l\m\b\h\h\t\w\9\t\k\l\e\t\f\q\8\e\0\x\u\u\f\e\h\h\s\4\q\m\r\8\p\n\m\a\n\u\g\k\o\9\s\q\t\s\e\i\z\b\b\x\5\q\v\t\d\4\i\i\9\m\6\i\j\i\3\g\6\q\h\h\3\r\f\5\e\z\f\c\k\5\u\3\b\n\e\n\b\q\m\c\x\i\z\e\1\4\e\a\n\4\0\3\0\a\u\d\8\v\d\k\r\1\7\s\t\m\e\q\v\1\f\e\7\w\9\i\h\m\7\9\x\9\g\5\p\p\w\d\m\t\q\3\m\g\2\9\o\j\5\r\g\n\m\8\7\d\d\w\8\x\q\r\z\l\m\z\w\c\k\n\2\m\2\b\b\i\r\k\3\z\v\y\6\m\h\3\d\7\f\j\u\4\j\3\6\c\3\w\j\q\b\r\u\x\t\w\e\o\n\e\2\l\9\5\h\2\7\s\4\x\d\7\y\y\6\k\i\s\y\y\1\f\7\n\7\c\5\z\r\7\3\c\j\r\e\p\b\p\x\x\r\a\m\q\0\t\6\7\5\c\6\f\2\g\u\j\p\y\c\q\0\j\z\7\8\u\p\f\l\w\6\8\j\s\8\q\b\w\6\z\z\t\g\w\t\x\3\2\q\5\y\7\b\i\d\2\a\g\x\p\a\t\d\l\f\q\u\s\r\2\7\1\8\i\v\o\5\q\3\5\y\4\k\a\k\q\1\z\t\l\y\w\p\c\v\d\b\u\d\t\p\b\2\j\1\0\j\c\t\5\n\b\2\1\z\7\t\t\q\3\q\n\v\j\q\4\y\q\2\q\t\3\9\x\x\n\k\v\0\k\f\1\u\r\6\l\l\m\u\4\2\y\f\x\g\y\m\j\1\4\c\g\x\2\q\p\r\e\m\5\f\h\a\0\m\f\s\o\k\2\m\7\a\y\6\j\n\n\p\d\m\c\5\l\r\7\z\w\7\4\3\p\8\1\c\3\8\1\4\v\t\x\g\e\t\6\g\j\m\s\1\n\2\e\o\8\z\n\8\o\s\e\p\m\k\k\v\p\i\7\e\x\u\6\x\i\x\p\4\6\4\6\a\s\q\0\v\9\d\7\a\g\d\w\1\r\z\v\y\v\h\a\d\a\g\x ]] 00:06:48.280 03:09:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:48.539 03:09:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:06:48.539 03:09:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:06:48.539 03:09:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:48.539 03:09:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:48.539 [2024-10-09 03:09:31.769177] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:06:48.539 [2024-10-09 03:09:31.769252] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61325 ] 00:06:48.539 { 00:06:48.539 "subsystems": [ 00:06:48.539 { 00:06:48.539 "subsystem": "bdev", 00:06:48.539 "config": [ 00:06:48.539 { 00:06:48.539 "params": { 00:06:48.539 "block_size": 512, 00:06:48.539 "num_blocks": 1048576, 00:06:48.539 "name": "malloc0" 00:06:48.539 }, 00:06:48.539 "method": "bdev_malloc_create" 00:06:48.539 }, 00:06:48.539 { 00:06:48.539 "params": { 00:06:48.539 "filename": "/dev/zram1", 00:06:48.539 "name": "uring0" 00:06:48.539 }, 00:06:48.539 "method": "bdev_uring_create" 00:06:48.539 }, 00:06:48.539 { 00:06:48.539 "method": "bdev_wait_for_examine" 00:06:48.539 } 00:06:48.539 ] 00:06:48.539 } 00:06:48.539 ] 00:06:48.539 } 00:06:48.798 [2024-10-09 03:09:31.895959] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.798 [2024-10-09 03:09:31.975721] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.798 [2024-10-09 03:09:32.034357] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:50.175  [2024-10-09T03:09:34.415Z] Copying: 152/512 [MB] (152 MBps) [2024-10-09T03:09:35.351Z] Copying: 316/512 [MB] (164 MBps) [2024-10-09T03:09:35.610Z] Copying: 466/512 [MB] (150 MBps) [2024-10-09T03:09:36.178Z] Copying: 512/512 [MB] (average 155 MBps) 00:06:52.875 00:06:52.875 03:09:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:06:52.875 03:09:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:06:52.875 03:09:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:52.875 03:09:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:52.875 03:09:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:06:52.875 03:09:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:06:52.875 03:09:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:52.875 03:09:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:52.875 [2024-10-09 03:09:35.999304] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:06:52.875 [2024-10-09 03:09:35.999435] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61387 ] 00:06:52.875 { 00:06:52.875 "subsystems": [ 00:06:52.875 { 00:06:52.875 "subsystem": "bdev", 00:06:52.875 "config": [ 00:06:52.875 { 00:06:52.875 "params": { 00:06:52.875 "block_size": 512, 00:06:52.875 "num_blocks": 1048576, 00:06:52.875 "name": "malloc0" 00:06:52.875 }, 00:06:52.875 "method": "bdev_malloc_create" 00:06:52.875 }, 00:06:52.875 { 00:06:52.875 "params": { 00:06:52.875 "filename": "/dev/zram1", 00:06:52.875 "name": "uring0" 00:06:52.875 }, 00:06:52.875 "method": "bdev_uring_create" 00:06:52.875 }, 00:06:52.875 { 00:06:52.875 "params": { 00:06:52.875 "name": "uring0" 00:06:52.875 }, 00:06:52.875 "method": "bdev_uring_delete" 00:06:52.875 }, 00:06:52.875 { 00:06:52.875 "method": "bdev_wait_for_examine" 00:06:52.875 } 00:06:52.875 ] 00:06:52.875 } 00:06:52.875 ] 00:06:52.875 } 00:06:52.875 [2024-10-09 03:09:36.149240] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.133 [2024-10-09 03:09:36.267519] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.133 [2024-10-09 03:09:36.325021] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:53.392  [2024-10-09T03:09:36.953Z] Copying: 0/0 [B] (average 0 Bps) 00:06:53.650 00:06:53.912 03:09:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:06:53.912 03:09:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:53.912 03:09:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:06:53.912 03:09:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:06:53.912 03:09:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:53.912 03:09:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:53.912 03:09:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:53.912 03:09:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.913 03:09:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.913 03:09:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.913 03:09:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.913 03:09:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.913 03:09:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.913 03:09:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.913 03:09:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:53.913 03:09:36 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:53.913 [2024-10-09 03:09:37.013912] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:06:53.913 [2024-10-09 03:09:37.014021] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61418 ] 00:06:53.913 { 00:06:53.913 "subsystems": [ 00:06:53.913 { 00:06:53.913 "subsystem": "bdev", 00:06:53.913 "config": [ 00:06:53.913 { 00:06:53.913 "params": { 00:06:53.913 "block_size": 512, 00:06:53.913 "num_blocks": 1048576, 00:06:53.913 "name": "malloc0" 00:06:53.913 }, 00:06:53.913 "method": "bdev_malloc_create" 00:06:53.913 }, 00:06:53.913 { 00:06:53.913 "params": { 00:06:53.913 "filename": "/dev/zram1", 00:06:53.913 "name": "uring0" 00:06:53.913 }, 00:06:53.913 "method": "bdev_uring_create" 00:06:53.913 }, 00:06:53.913 { 00:06:53.913 "params": { 00:06:53.913 "name": "uring0" 00:06:53.913 }, 00:06:53.913 "method": "bdev_uring_delete" 00:06:53.913 }, 00:06:53.913 { 00:06:53.913 "method": "bdev_wait_for_examine" 00:06:53.913 } 00:06:53.913 ] 00:06:53.913 } 00:06:53.913 ] 00:06:53.913 } 00:06:53.913 [2024-10-09 03:09:37.150457] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.178 [2024-10-09 03:09:37.256690] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.178 [2024-10-09 03:09:37.315533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.436 [2024-10-09 03:09:37.523875] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:06:54.436 [2024-10-09 03:09:37.523942] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:06:54.436 [2024-10-09 03:09:37.523969] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:06:54.436 [2024-10-09 03:09:37.523978] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:54.695 [2024-10-09 03:09:37.840374] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:54.695 03:09:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:06:54.695 03:09:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:54.695 03:09:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:06:54.695 03:09:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:06:54.695 03:09:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:06:54.695 03:09:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:54.695 03:09:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:06:54.695 03:09:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:06:54.695 03:09:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:06:54.695 03:09:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:06:54.695 03:09:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:06:54.695 03:09:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:54.954 00:06:54.954 real 0m15.424s 00:06:54.954 user 0m10.457s 00:06:54.954 sys 0m12.678s 00:06:54.954 03:09:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.954 03:09:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:54.954 ************************************ 00:06:54.954 END TEST dd_uring_copy 00:06:54.954 ************************************ 00:06:55.213 00:06:55.213 real 0m15.687s 00:06:55.213 user 0m10.607s 00:06:55.213 sys 0m12.792s 00:06:55.213 03:09:38 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.213 03:09:38 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:55.213 ************************************ 00:06:55.213 END TEST spdk_dd_uring 00:06:55.213 ************************************ 00:06:55.213 03:09:38 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:55.213 03:09:38 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.213 03:09:38 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.213 03:09:38 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:55.213 ************************************ 00:06:55.213 START TEST spdk_dd_sparse 00:06:55.213 ************************************ 00:06:55.213 03:09:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:55.213 * Looking for test storage... 00:06:55.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:55.213 03:09:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:55.213 03:09:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # lcov --version 00:06:55.213 03:09:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:55.213 03:09:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:55.213 03:09:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.213 03:09:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.213 03:09:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.213 03:09:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.213 03:09:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:55.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.214 --rc genhtml_branch_coverage=1 00:06:55.214 --rc genhtml_function_coverage=1 00:06:55.214 --rc genhtml_legend=1 00:06:55.214 --rc geninfo_all_blocks=1 00:06:55.214 --rc geninfo_unexecuted_blocks=1 00:06:55.214 00:06:55.214 ' 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:55.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.214 --rc genhtml_branch_coverage=1 00:06:55.214 --rc genhtml_function_coverage=1 00:06:55.214 --rc genhtml_legend=1 00:06:55.214 --rc geninfo_all_blocks=1 00:06:55.214 --rc geninfo_unexecuted_blocks=1 00:06:55.214 00:06:55.214 ' 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:55.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.214 --rc genhtml_branch_coverage=1 00:06:55.214 --rc genhtml_function_coverage=1 00:06:55.214 --rc genhtml_legend=1 00:06:55.214 --rc geninfo_all_blocks=1 00:06:55.214 --rc geninfo_unexecuted_blocks=1 00:06:55.214 00:06:55.214 ' 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:55.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.214 --rc genhtml_branch_coverage=1 00:06:55.214 --rc genhtml_function_coverage=1 00:06:55.214 --rc genhtml_legend=1 00:06:55.214 --rc geninfo_all_blocks=1 00:06:55.214 --rc geninfo_unexecuted_blocks=1 00:06:55.214 00:06:55.214 ' 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.214 03:09:38 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:06:55.214 03:09:38 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.473 03:09:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:06:55.473 03:09:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:06:55.473 03:09:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:06:55.473 03:09:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:06:55.473 03:09:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:06:55.473 03:09:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:06:55.473 03:09:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:06:55.473 03:09:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:06:55.473 03:09:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:06:55.473 03:09:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:06:55.473 03:09:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:06:55.473 1+0 records in 00:06:55.473 1+0 records out 00:06:55.473 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00657333 s, 638 MB/s 00:06:55.473 03:09:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:06:55.473 1+0 records in 00:06:55.473 1+0 records out 00:06:55.473 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00762776 s, 550 MB/s 00:06:55.473 03:09:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:06:55.473 1+0 records in 00:06:55.473 1+0 records out 00:06:55.473 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0050195 s, 836 MB/s 00:06:55.473 03:09:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:06:55.473 03:09:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.473 03:09:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.473 03:09:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:55.473 ************************************ 00:06:55.473 START TEST dd_sparse_file_to_file 00:06:55.473 ************************************ 00:06:55.473 03:09:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1125 -- # file_to_file 00:06:55.473 03:09:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:06:55.473 03:09:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:06:55.473 03:09:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:55.473 03:09:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:06:55.473 03:09:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:06:55.473 03:09:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:06:55.473 03:09:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:06:55.474 03:09:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:06:55.474 03:09:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:06:55.474 03:09:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:55.474 [2024-10-09 03:09:38.625690] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:06:55.474 [2024-10-09 03:09:38.625836] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61517 ] 00:06:55.474 { 00:06:55.474 "subsystems": [ 00:06:55.474 { 00:06:55.474 "subsystem": "bdev", 00:06:55.474 "config": [ 00:06:55.474 { 00:06:55.474 "params": { 00:06:55.474 "block_size": 4096, 00:06:55.474 "filename": "dd_sparse_aio_disk", 00:06:55.474 "name": "dd_aio" 00:06:55.474 }, 00:06:55.474 "method": "bdev_aio_create" 00:06:55.474 }, 00:06:55.474 { 00:06:55.474 "params": { 00:06:55.474 "lvs_name": "dd_lvstore", 00:06:55.474 "bdev_name": "dd_aio" 00:06:55.474 }, 00:06:55.474 "method": "bdev_lvol_create_lvstore" 00:06:55.474 }, 00:06:55.474 { 00:06:55.474 "method": "bdev_wait_for_examine" 00:06:55.474 } 00:06:55.474 ] 00:06:55.474 } 00:06:55.474 ] 00:06:55.474 } 00:06:55.474 [2024-10-09 03:09:38.764649] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.732 [2024-10-09 03:09:38.880321] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.732 [2024-10-09 03:09:38.938972] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.991  [2024-10-09T03:09:39.554Z] Copying: 12/36 [MB] (average 800 MBps) 00:06:56.251 00:06:56.251 03:09:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:06:56.251 03:09:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:06:56.251 03:09:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:06:56.251 03:09:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:06:56.251 03:09:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:56.251 03:09:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:06:56.251 03:09:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:06:56.251 03:09:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:06:56.251 03:09:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:06:56.251 03:09:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:56.251 00:06:56.251 real 0m0.776s 00:06:56.251 user 0m0.504s 00:06:56.251 sys 0m0.400s 00:06:56.251 03:09:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.251 ************************************ 00:06:56.251 END TEST dd_sparse_file_to_file 00:06:56.251 03:09:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:56.251 ************************************ 00:06:56.251 03:09:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:06:56.251 03:09:39 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:56.251 03:09:39 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.251 03:09:39 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:56.251 ************************************ 00:06:56.251 START TEST dd_sparse_file_to_bdev 
00:06:56.251 ************************************ 00:06:56.251 03:09:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1125 -- # file_to_bdev 00:06:56.251 03:09:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:56.251 03:09:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:06:56.251 03:09:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:06:56.251 03:09:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:06:56.251 03:09:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:06:56.251 03:09:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:06:56.251 03:09:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:56.251 03:09:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:56.251 [2024-10-09 03:09:39.448649] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:06:56.251 [2024-10-09 03:09:39.448744] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61560 ] 00:06:56.251 { 00:06:56.251 "subsystems": [ 00:06:56.251 { 00:06:56.251 "subsystem": "bdev", 00:06:56.251 "config": [ 00:06:56.251 { 00:06:56.251 "params": { 00:06:56.251 "block_size": 4096, 00:06:56.251 "filename": "dd_sparse_aio_disk", 00:06:56.251 "name": "dd_aio" 00:06:56.251 }, 00:06:56.251 "method": "bdev_aio_create" 00:06:56.251 }, 00:06:56.251 { 00:06:56.251 "params": { 00:06:56.251 "lvs_name": "dd_lvstore", 00:06:56.251 "lvol_name": "dd_lvol", 00:06:56.251 "size_in_mib": 36, 00:06:56.251 "thin_provision": true 00:06:56.251 }, 00:06:56.251 "method": "bdev_lvol_create" 00:06:56.251 }, 00:06:56.251 { 00:06:56.251 "method": "bdev_wait_for_examine" 00:06:56.251 } 00:06:56.251 ] 00:06:56.251 } 00:06:56.251 ] 00:06:56.251 } 00:06:56.511 [2024-10-09 03:09:39.586707] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.511 [2024-10-09 03:09:39.694784] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.511 [2024-10-09 03:09:39.753890] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.770  [2024-10-09T03:09:40.332Z] Copying: 12/36 [MB] (average 461 MBps) 00:06:57.029 00:06:57.029 00:06:57.029 real 0m0.729s 00:06:57.029 user 0m0.481s 00:06:57.029 sys 0m0.366s 00:06:57.029 03:09:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.029 03:09:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:57.029 ************************************ 00:06:57.029 END TEST dd_sparse_file_to_bdev 00:06:57.029 ************************************ 00:06:57.029 03:09:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:06:57.029 03:09:40 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:57.029 03:09:40 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.029 03:09:40 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:57.029 ************************************ 00:06:57.029 START TEST dd_sparse_bdev_to_file 00:06:57.029 ************************************ 00:06:57.029 03:09:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1125 -- # bdev_to_file 00:06:57.029 03:09:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:06:57.029 03:09:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:06:57.029 03:09:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:57.029 03:09:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:06:57.029 03:09:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:06:57.029 03:09:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:06:57.029 03:09:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:06:57.029 03:09:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:57.029 { 00:06:57.029 "subsystems": [ 00:06:57.029 { 00:06:57.029 "subsystem": "bdev", 00:06:57.029 "config": [ 00:06:57.029 { 00:06:57.029 "params": { 00:06:57.029 "block_size": 4096, 00:06:57.029 "filename": "dd_sparse_aio_disk", 00:06:57.029 "name": "dd_aio" 00:06:57.029 }, 00:06:57.029 "method": "bdev_aio_create" 00:06:57.029 }, 00:06:57.029 { 00:06:57.029 "method": "bdev_wait_for_examine" 00:06:57.029 } 00:06:57.029 ] 00:06:57.029 } 00:06:57.029 ] 00:06:57.029 } 00:06:57.029 [2024-10-09 03:09:40.231962] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:06:57.029 [2024-10-09 03:09:40.232129] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61598 ] 00:06:57.288 [2024-10-09 03:09:40.372868] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.288 [2024-10-09 03:09:40.471790] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.288 [2024-10-09 03:09:40.529281] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:57.547  [2024-10-09T03:09:41.109Z] Copying: 12/36 [MB] (average 1000 MBps) 00:06:57.806 00:06:57.806 03:09:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:06:57.806 03:09:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:06:57.806 03:09:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:06:57.806 03:09:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:06:57.806 03:09:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:57.806 03:09:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:06:57.806 03:09:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:06:57.806 03:09:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:06:57.806 03:09:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:06:57.806 03:09:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:57.806 00:06:57.806 real 0m0.737s 00:06:57.806 user 0m0.457s 00:06:57.806 sys 0m0.384s 00:06:57.806 03:09:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.806 03:09:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:57.806 ************************************ 00:06:57.806 END TEST dd_sparse_bdev_to_file 00:06:57.806 ************************************ 00:06:57.806 03:09:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:06:57.806 03:09:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:06:57.806 03:09:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:06:57.806 03:09:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:06:57.806 03:09:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:06:57.806 00:06:57.806 real 0m2.663s 00:06:57.806 user 0m1.641s 00:06:57.806 sys 0m1.363s 00:06:57.806 03:09:40 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.806 03:09:40 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:57.806 ************************************ 00:06:57.806 END TEST spdk_dd_sparse 00:06:57.806 ************************************ 00:06:57.806 03:09:41 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:06:57.806 03:09:41 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:57.806 03:09:41 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.806 03:09:41 spdk_dd -- 
common/autotest_common.sh@10 -- # set +x 00:06:57.806 ************************************ 00:06:57.806 START TEST spdk_dd_negative 00:06:57.806 ************************************ 00:06:57.806 03:09:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:06:57.806 * Looking for test storage... 00:06:57.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:57.806 03:09:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:57.806 03:09:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # lcov --version 00:06:57.806 03:09:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:58.066 03:09:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:58.066 03:09:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.066 03:09:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.066 03:09:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.066 03:09:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.066 03:09:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.066 03:09:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.066 03:09:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.066 03:09:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.066 03:09:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.066 03:09:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.066 03:09:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.066 03:09:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:06:58.066 03:09:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:06:58.066 03:09:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.066 03:09:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:58.066 03:09:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:06:58.066 03:09:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:06:58.066 03:09:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.066 03:09:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:06:58.066 03:09:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.066 03:09:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:06:58.066 03:09:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:06:58.066 03:09:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.066 03:09:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:06:58.066 03:09:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.066 03:09:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.066 03:09:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.066 03:09:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:58.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.067 --rc genhtml_branch_coverage=1 00:06:58.067 --rc genhtml_function_coverage=1 00:06:58.067 --rc genhtml_legend=1 00:06:58.067 --rc geninfo_all_blocks=1 00:06:58.067 --rc geninfo_unexecuted_blocks=1 00:06:58.067 00:06:58.067 ' 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:58.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.067 --rc genhtml_branch_coverage=1 00:06:58.067 --rc genhtml_function_coverage=1 00:06:58.067 --rc genhtml_legend=1 00:06:58.067 --rc geninfo_all_blocks=1 00:06:58.067 --rc geninfo_unexecuted_blocks=1 00:06:58.067 00:06:58.067 ' 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:58.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.067 --rc genhtml_branch_coverage=1 00:06:58.067 --rc genhtml_function_coverage=1 00:06:58.067 --rc genhtml_legend=1 00:06:58.067 --rc geninfo_all_blocks=1 00:06:58.067 --rc geninfo_unexecuted_blocks=1 00:06:58.067 00:06:58.067 ' 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:58.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.067 --rc genhtml_branch_coverage=1 00:06:58.067 --rc genhtml_function_coverage=1 00:06:58.067 --rc genhtml_legend=1 00:06:58.067 --rc geninfo_all_blocks=1 00:06:58.067 --rc geninfo_unexecuted_blocks=1 00:06:58.067 00:06:58.067 ' 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:58.067 ************************************ 00:06:58.067 START TEST 
dd_invalid_arguments 00:06:58.067 ************************************ 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1125 -- # invalid_arguments 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:58.067 03:09:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:58.067 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:06:58.067 00:06:58.067 CPU options: 00:06:58.067 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:06:58.067 (like [0,1,10]) 00:06:58.067 --lcores lcore to CPU mapping list. The list is in the format: 00:06:58.067 [<,lcores[@CPUs]>...] 00:06:58.067 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:58.067 Within the group, '-' is used for range separator, 00:06:58.067 ',' is used for single number separator. 00:06:58.067 '( )' can be omitted for single element group, 00:06:58.067 '@' can be omitted if cpus and lcores have the same value 00:06:58.067 --disable-cpumask-locks Disable CPU core lock files. 00:06:58.067 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:06:58.067 pollers in the app support interrupt mode) 00:06:58.067 -p, --main-core main (primary) core for DPDK 00:06:58.067 00:06:58.067 Configuration options: 00:06:58.067 -c, --config, --json JSON config file 00:06:58.067 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:58.067 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:06:58.067 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:58.067 --rpcs-allowed comma-separated list of permitted RPCS 00:06:58.067 --json-ignore-init-errors don't exit on invalid config entry 00:06:58.067 00:06:58.067 Memory options: 00:06:58.067 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:58.067 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:58.067 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:58.067 -R, --huge-unlink unlink huge files after initialization 00:06:58.067 -n, --mem-channels number of memory channels used for DPDK 00:06:58.067 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:58.067 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:58.067 --no-huge run without using hugepages 00:06:58.067 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:06:58.067 -i, --shm-id shared memory ID (optional) 00:06:58.067 -g, --single-file-segments force creating just one hugetlbfs file 00:06:58.067 00:06:58.067 PCI options: 00:06:58.067 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:58.067 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:58.067 -u, --no-pci disable PCI access 00:06:58.067 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:58.067 00:06:58.067 Log options: 00:06:58.067 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:06:58.067 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:06:58.067 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:06:58.067 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:06:58.067 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:06:58.067 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:06:58.067 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:06:58.067 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:06:58.067 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:06:58.067 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:06:58.067 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:06:58.067 --silence-noticelog disable notice level logging to stderr 00:06:58.067 00:06:58.067 Trace options: 00:06:58.067 --num-trace-entries number of trace entries for each core, must be power of 2, 00:06:58.067 setting 0 to disable trace (default 32768) 00:06:58.068 Tracepoints vary in size and can use more than one trace entry. 00:06:58.068 -e, --tpoint-group [:] 00:06:58.068 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:06:58.068 [2024-10-09 03:09:41.277800] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:06:58.068 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:06:58.068 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:06:58.068 bdev_raid, scheduler, all). 00:06:58.068 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:06:58.068 a tracepoint group. First tpoint inside a group can be enabled by 00:06:58.068 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:06:58.068 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:06:58.068 in /include/spdk_internal/trace_defs.h 00:06:58.068 00:06:58.068 Other options: 00:06:58.068 -h, --help show this usage 00:06:58.068 -v, --version print SPDK version 00:06:58.068 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:58.068 --env-context Opaque context for use of the env implementation 00:06:58.068 00:06:58.068 Application specific: 00:06:58.068 [--------- DD Options ---------] 00:06:58.068 --if Input file. Must specify either --if or --ib. 00:06:58.068 --ib Input bdev. Must specifier either --if or --ib 00:06:58.068 --of Output file. Must specify either --of or --ob. 00:06:58.068 --ob Output bdev. Must specify either --of or --ob. 00:06:58.068 --iflag Input file flags. 00:06:58.068 --oflag Output file flags. 00:06:58.068 --bs I/O unit size (default: 4096) 00:06:58.068 --qd Queue depth (default: 2) 00:06:58.068 --count I/O unit count. The number of I/O units to copy. (default: all) 00:06:58.068 --skip Skip this many I/O units at start of input. (default: 0) 00:06:58.068 --seek Skip this many I/O units at start of output. (default: 0) 00:06:58.068 --aio Force usage of AIO. (by default io_uring is used if available) 00:06:58.068 --sparse Enable hole skipping in input target 00:06:58.068 Available iflag and oflag values: 00:06:58.068 append - append mode 00:06:58.068 direct - use direct I/O for data 00:06:58.068 directory - fail unless a directory 00:06:58.068 dsync - use synchronized I/O for data 00:06:58.068 noatime - do not update access time 00:06:58.068 noctty - do not assign controlling terminal from file 00:06:58.068 nofollow - do not follow symlinks 00:06:58.068 nonblock - use non-blocking I/O 00:06:58.068 sync - use synchronized I/O for data and metadata 00:06:58.068 ************************************ 00:06:58.068 END TEST dd_invalid_arguments 00:06:58.068 ************************************ 00:06:58.068 03:09:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:06:58.068 03:09:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:58.068 03:09:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:58.068 03:09:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:58.068 00:06:58.068 real 0m0.068s 00:06:58.068 user 0m0.038s 00:06:58.068 sys 0m0.029s 00:06:58.068 03:09:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.068 03:09:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:06:58.068 03:09:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:06:58.068 03:09:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.068 03:09:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.068 03:09:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:58.068 ************************************ 00:06:58.068 START TEST dd_double_input 00:06:58.068 ************************************ 00:06:58.068 03:09:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1125 -- # double_input 00:06:58.068 03:09:41 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:58.068 03:09:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:06:58.068 03:09:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:58.068 03:09:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.068 03:09:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.068 03:09:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.068 03:09:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.068 03:09:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.068 03:09:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.068 03:09:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.068 03:09:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:58.068 03:09:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:58.327 [2024-10-09 03:09:41.398669] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:58.327 ************************************ 00:06:58.327 END TEST dd_double_input 00:06:58.327 ************************************ 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:58.327 00:06:58.327 real 0m0.065s 00:06:58.327 user 0m0.033s 00:06:58.327 sys 0m0.031s 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:58.327 ************************************ 00:06:58.327 START TEST dd_double_output 00:06:58.327 ************************************ 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1125 -- # double_output 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:58.327 [2024-10-09 03:09:41.522692] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:58.327 ************************************ 00:06:58.327 END TEST dd_double_output 00:06:58.327 ************************************ 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:58.327 00:06:58.327 real 0m0.075s 00:06:58.327 user 0m0.049s 00:06:58.327 sys 0m0.025s 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:58.327 ************************************ 00:06:58.327 START TEST dd_no_input 00:06:58.327 ************************************ 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1125 -- # no_input 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.327 03:09:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.328 03:09:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.328 03:09:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.328 03:09:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.328 03:09:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.328 03:09:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.328 03:09:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:58.328 03:09:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:58.586 [2024-10-09 03:09:41.650499] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:06:58.586 03:09:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:06:58.586 03:09:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:58.586 03:09:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:58.586 03:09:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:58.586 00:06:58.586 real 0m0.076s 00:06:58.586 user 0m0.047s 00:06:58.587 sys 0m0.028s 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.587 ************************************ 00:06:58.587 END TEST dd_no_input 00:06:58.587 ************************************ 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:58.587 ************************************ 00:06:58.587 START TEST dd_no_output 00:06:58.587 ************************************ 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1125 -- # no_output 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:58.587 [2024-10-09 03:09:41.775924] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:06:58.587 03:09:41 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:58.587 ************************************ 00:06:58.587 END TEST dd_no_output 00:06:58.587 ************************************ 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:58.587 00:06:58.587 real 0m0.075s 00:06:58.587 user 0m0.047s 00:06:58.587 sys 0m0.028s 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:58.587 ************************************ 00:06:58.587 START TEST dd_wrong_blocksize 00:06:58.587 ************************************ 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1125 -- # wrong_blocksize 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:58.587 03:09:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:58.846 [2024-10-09 03:09:41.902915] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:06:58.846 03:09:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:06:58.846 03:09:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:58.846 03:09:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:58.846 ************************************ 00:06:58.846 END TEST dd_wrong_blocksize 00:06:58.846 ************************************ 00:06:58.846 03:09:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:58.846 00:06:58.846 real 0m0.077s 00:06:58.846 user 0m0.049s 00:06:58.846 sys 0m0.027s 00:06:58.846 03:09:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.846 03:09:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:06:58.846 03:09:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:06:58.846 03:09:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.846 03:09:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.846 03:09:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:58.846 ************************************ 00:06:58.846 START TEST dd_smaller_blocksize 00:06:58.846 ************************************ 00:06:58.846 03:09:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1125 -- # smaller_blocksize 00:06:58.846 03:09:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:58.846 03:09:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:06:58.846 03:09:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:58.846 03:09:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.846 03:09:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.846 03:09:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.846 03:09:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.846 03:09:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.846 03:09:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.846 03:09:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.846 
03:09:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:58.846 03:09:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:58.846 [2024-10-09 03:09:42.036980] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:06:58.846 [2024-10-09 03:09:42.037279] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61824 ] 00:06:59.105 [2024-10-09 03:09:42.178125] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.105 [2024-10-09 03:09:42.285575] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.105 [2024-10-09 03:09:42.343562] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:59.363 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:06:59.931 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:06:59.931 [2024-10-09 03:09:42.955336] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:06:59.931 [2024-10-09 03:09:42.955421] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:59.931 [2024-10-09 03:09:43.083401] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:59.931 03:09:43 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:06:59.931 03:09:43 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:59.931 03:09:43 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:06:59.931 03:09:43 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:06:59.931 03:09:43 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:06:59.931 03:09:43 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:59.931 00:06:59.931 real 0m1.200s 00:06:59.931 user 0m0.466s 00:06:59.931 sys 0m0.624s 00:06:59.931 03:09:43 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.931 ************************************ 00:06:59.931 END TEST dd_smaller_blocksize 00:06:59.931 ************************************ 00:06:59.931 03:09:43 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:06:59.931 03:09:43 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:06:59.931 03:09:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:59.931 03:09:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.931 03:09:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:59.931 ************************************ 00:06:59.931 START TEST dd_invalid_count 00:06:59.931 ************************************ 00:06:59.931 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1125 -- # invalid_count 
00:06:59.931 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:59.931 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:06:59.931 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:59.931 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.931 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.931 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:00.190 [2024-10-09 03:09:43.288303] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:00.190 ************************************ 00:07:00.190 END TEST dd_invalid_count 00:07:00.190 ************************************ 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:00.190 00:07:00.190 real 0m0.078s 00:07:00.190 user 0m0.049s 00:07:00.190 sys 0m0.027s 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:00.190 ************************************ 
00:07:00.190 START TEST dd_invalid_oflag 00:07:00.190 ************************************ 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1125 -- # invalid_oflag 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:00.190 [2024-10-09 03:09:43.417520] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:00.190 00:07:00.190 real 0m0.078s 00:07:00.190 user 0m0.044s 00:07:00.190 sys 0m0.032s 00:07:00.190 ************************************ 00:07:00.190 END TEST dd_invalid_oflag 00:07:00.190 ************************************ 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:00.190 ************************************ 00:07:00.190 START TEST dd_invalid_iflag 00:07:00.190 
************************************ 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1125 -- # invalid_iflag 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:00.190 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.449 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.449 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.449 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.449 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.449 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.449 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.449 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:00.449 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:00.449 [2024-10-09 03:09:43.547100] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:07:00.449 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:07:00.449 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:00.449 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:00.449 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:00.449 00:07:00.449 real 0m0.076s 00:07:00.449 user 0m0.051s 00:07:00.449 sys 0m0.025s 00:07:00.449 ************************************ 00:07:00.449 END TEST dd_invalid_iflag 00:07:00.449 ************************************ 00:07:00.449 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.449 03:09:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:07:00.449 03:09:43 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:07:00.449 03:09:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.449 03:09:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.449 03:09:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:00.449 ************************************ 00:07:00.449 START TEST dd_unknown_flag 00:07:00.449 ************************************ 00:07:00.449 
03:09:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1125 -- # unknown_flag 00:07:00.449 03:09:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:00.449 03:09:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:07:00.449 03:09:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:00.450 03:09:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.450 03:09:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.450 03:09:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.450 03:09:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.450 03:09:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.450 03:09:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.450 03:09:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.450 03:09:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:00.450 03:09:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:00.450 [2024-10-09 03:09:43.670614] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:07:00.450 [2024-10-09 03:09:43.670873] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61922 ] 00:07:00.708 [2024-10-09 03:09:43.803208] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.708 [2024-10-09 03:09:43.887624] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.708 [2024-10-09 03:09:43.943392] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:00.708 [2024-10-09 03:09:43.980943] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:00.708 [2024-10-09 03:09:43.981014] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:00.708 [2024-10-09 03:09:43.981099] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:00.708 [2024-10-09 03:09:43.981115] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:00.708 [2024-10-09 03:09:43.981388] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:00.708 [2024-10-09 03:09:43.981405] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:00.708 [2024-10-09 03:09:43.981465] app.c:1047:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:00.708 [2024-10-09 03:09:43.981476] app.c:1047:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:00.969 [2024-10-09 03:09:44.104591] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:00.969 03:09:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:07:00.969 03:09:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:00.969 03:09:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:07:00.969 03:09:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:07:00.969 03:09:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:07:00.969 03:09:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:00.969 00:07:00.969 real 0m0.576s 00:07:00.969 user 0m0.322s 00:07:00.969 sys 0m0.160s 00:07:00.969 03:09:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.969 03:09:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:00.969 ************************************ 00:07:00.969 END TEST dd_unknown_flag 00:07:00.969 ************************************ 00:07:00.969 03:09:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:07:00.969 03:09:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.969 03:09:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.969 03:09:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:00.969 ************************************ 00:07:00.969 START TEST dd_invalid_json 00:07:00.969 ************************************ 00:07:00.969 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1125 -- # invalid_json 00:07:00.969 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:00.969 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:07:00.969 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:07:00.969 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:00.969 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.969 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.969 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.969 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.969 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.969 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.969 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.969 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:00.969 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:01.229 [2024-10-09 03:09:44.307231] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:07:01.229 [2024-10-09 03:09:44.307335] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61956 ] 00:07:01.229 [2024-10-09 03:09:44.443433] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.487 [2024-10-09 03:09:44.535789] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.488 [2024-10-09 03:09:44.535876] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:01.488 [2024-10-09 03:09:44.535905] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:01.488 [2024-10-09 03:09:44.535914] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:01.488 [2024-10-09 03:09:44.535950] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:01.488 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:07:01.488 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:01.488 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:07:01.488 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:07:01.488 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:07:01.488 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:01.488 00:07:01.488 real 0m0.372s 00:07:01.488 user 0m0.196s 00:07:01.488 sys 0m0.074s 00:07:01.488 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.488 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:01.488 ************************************ 00:07:01.488 END TEST dd_invalid_json 00:07:01.488 ************************************ 00:07:01.488 03:09:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:07:01.488 03:09:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:01.488 03:09:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.488 03:09:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:01.488 ************************************ 00:07:01.488 START TEST dd_invalid_seek 00:07:01.488 ************************************ 00:07:01.488 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1125 -- # invalid_seek 00:07:01.488 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:01.488 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:01.488 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:07:01.488 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:01.488 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:01.488 
03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:07:01.488 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:01.488 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@650 -- # local es=0 00:07:01.488 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:01.488 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.488 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:07:01.488 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:07:01.488 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:07:01.488 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.488 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.488 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.488 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.488 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.488 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.488 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:01.488 03:09:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:01.488 { 00:07:01.488 "subsystems": [ 00:07:01.488 { 00:07:01.488 "subsystem": "bdev", 00:07:01.488 "config": [ 00:07:01.488 { 00:07:01.488 "params": { 00:07:01.488 "block_size": 512, 00:07:01.488 "num_blocks": 512, 00:07:01.488 "name": "malloc0" 00:07:01.488 }, 00:07:01.488 "method": "bdev_malloc_create" 00:07:01.488 }, 00:07:01.488 { 00:07:01.488 "params": { 00:07:01.488 "block_size": 512, 00:07:01.488 "num_blocks": 512, 00:07:01.488 "name": "malloc1" 00:07:01.488 }, 00:07:01.488 "method": "bdev_malloc_create" 00:07:01.488 }, 00:07:01.488 { 00:07:01.488 "method": "bdev_wait_for_examine" 00:07:01.488 } 00:07:01.488 ] 00:07:01.488 } 00:07:01.488 ] 00:07:01.488 } 00:07:01.488 [2024-10-09 03:09:44.734819] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:07:01.488 [2024-10-09 03:09:44.734929] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61980 ] 00:07:01.747 [2024-10-09 03:09:44.874437] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.747 [2024-10-09 03:09:44.959872] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.747 [2024-10-09 03:09:45.014849] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:02.005 [2024-10-09 03:09:45.075419] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:07:02.005 [2024-10-09 03:09:45.075478] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:02.005 [2024-10-09 03:09:45.201221] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:02.005 03:09:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # es=228 00:07:02.005 03:09:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:02.005 03:09:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@662 -- # es=100 00:07:02.005 03:09:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # case "$es" in 00:07:02.005 03:09:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@670 -- # es=1 00:07:02.006 03:09:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:02.006 00:07:02.006 real 0m0.618s 00:07:02.006 user 0m0.409s 00:07:02.006 sys 0m0.166s 00:07:02.006 03:09:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.006 03:09:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:07:02.006 ************************************ 00:07:02.006 END TEST dd_invalid_seek 00:07:02.006 ************************************ 00:07:02.264 03:09:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:07:02.264 03:09:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:02.264 03:09:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.264 03:09:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:02.264 ************************************ 00:07:02.264 START TEST dd_invalid_skip 00:07:02.264 ************************************ 00:07:02.264 03:09:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1125 -- # invalid_skip 00:07:02.264 03:09:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:02.265 03:09:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:02.265 03:09:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:07:02.265 03:09:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:02.265 03:09:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:07:02.265 03:09:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:07:02.265 03:09:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:02.265 03:09:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@650 -- # local es=0 00:07:02.265 03:09:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:07:02.265 03:09:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:02.265 03:09:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.265 03:09:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:07:02.265 03:09:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:07:02.265 03:09:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.265 03:09:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.265 03:09:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.265 03:09:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.265 03:09:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.265 03:09:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.265 03:09:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:02.265 03:09:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:02.265 [2024-10-09 03:09:45.394240] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:07:02.265 [2024-10-09 03:09:45.394313] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62019 ] 00:07:02.265 { 00:07:02.265 "subsystems": [ 00:07:02.265 { 00:07:02.265 "subsystem": "bdev", 00:07:02.265 "config": [ 00:07:02.265 { 00:07:02.265 "params": { 00:07:02.265 "block_size": 512, 00:07:02.265 "num_blocks": 512, 00:07:02.265 "name": "malloc0" 00:07:02.265 }, 00:07:02.265 "method": "bdev_malloc_create" 00:07:02.265 }, 00:07:02.265 { 00:07:02.265 "params": { 00:07:02.265 "block_size": 512, 00:07:02.265 "num_blocks": 512, 00:07:02.265 "name": "malloc1" 00:07:02.265 }, 00:07:02.265 "method": "bdev_malloc_create" 00:07:02.265 }, 00:07:02.265 { 00:07:02.265 "method": "bdev_wait_for_examine" 00:07:02.265 } 00:07:02.265 ] 00:07:02.265 } 00:07:02.265 ] 00:07:02.265 } 00:07:02.265 [2024-10-09 03:09:45.530124] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.524 [2024-10-09 03:09:45.640326] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.524 [2024-10-09 03:09:45.698974] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:02.524 [2024-10-09 03:09:45.763749] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:07:02.524 [2024-10-09 03:09:45.763841] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:02.782 [2024-10-09 03:09:45.898591] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:02.782 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # es=228 00:07:02.782 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:02.782 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@662 -- # es=100 00:07:02.782 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # case "$es" in 00:07:02.782 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@670 -- # es=1 00:07:02.782 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:02.782 00:07:02.782 real 0m0.730s 00:07:02.782 user 0m0.515s 00:07:02.782 sys 0m0.175s 00:07:02.782 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.783 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:07:02.783 ************************************ 00:07:02.783 END TEST dd_invalid_skip 00:07:02.783 ************************************ 00:07:03.041 03:09:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:07:03.041 03:09:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.041 03:09:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.041 03:09:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:03.041 ************************************ 00:07:03.041 START TEST dd_invalid_input_count 00:07:03.041 ************************************ 00:07:03.041 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1125 -- # invalid_input_count 00:07:03.041 03:09:46 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:03.041 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:03.041 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:07:03.041 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:03.041 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:03.041 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:07:03.041 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:03.041 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:07:03.041 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@650 -- # local es=0 00:07:03.041 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:03.041 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:07:03.041 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.041 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:07:03.041 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.041 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.041 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.041 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.041 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.041 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.041 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:03.041 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:03.041 [2024-10-09 03:09:46.185399] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:07:03.041 [2024-10-09 03:09:46.185500] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62058 ] 00:07:03.041 { 00:07:03.041 "subsystems": [ 00:07:03.041 { 00:07:03.041 "subsystem": "bdev", 00:07:03.041 "config": [ 00:07:03.041 { 00:07:03.041 "params": { 00:07:03.041 "block_size": 512, 00:07:03.041 "num_blocks": 512, 00:07:03.041 "name": "malloc0" 00:07:03.041 }, 00:07:03.041 "method": "bdev_malloc_create" 00:07:03.041 }, 00:07:03.041 { 00:07:03.041 "params": { 00:07:03.041 "block_size": 512, 00:07:03.041 "num_blocks": 512, 00:07:03.041 "name": "malloc1" 00:07:03.041 }, 00:07:03.041 "method": "bdev_malloc_create" 00:07:03.041 }, 00:07:03.041 { 00:07:03.041 "method": "bdev_wait_for_examine" 00:07:03.041 } 00:07:03.041 ] 00:07:03.041 } 00:07:03.041 ] 00:07:03.041 } 00:07:03.041 [2024-10-09 03:09:46.326126] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.299 [2024-10-09 03:09:46.443372] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.299 [2024-10-09 03:09:46.520138] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:03.299 [2024-10-09 03:09:46.594805] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:07:03.299 [2024-10-09 03:09:46.594868] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:03.558 [2024-10-09 03:09:46.772783] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:03.817 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # es=228 00:07:03.817 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.817 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@662 -- # es=100 00:07:03.817 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # case "$es" in 00:07:03.817 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@670 -- # es=1 00:07:03.817 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.817 00:07:03.817 real 0m0.818s 00:07:03.817 user 0m0.567s 00:07:03.817 sys 0m0.211s 00:07:03.817 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.817 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:07:03.817 ************************************ 00:07:03.817 END TEST dd_invalid_input_count 00:07:03.817 ************************************ 00:07:03.817 03:09:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:07:03.817 03:09:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.817 03:09:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.817 03:09:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:03.817 ************************************ 00:07:03.817 START TEST dd_invalid_output_count 00:07:03.817 ************************************ 00:07:03.817 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1125 -- # 
invalid_output_count 00:07:03.817 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:03.817 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:03.817 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:07:03.817 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:03.817 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@650 -- # local es=0 00:07:03.817 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:03.817 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.817 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:07:03.817 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:07:03.817 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:07:03.817 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.817 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.817 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.817 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.817 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.817 03:09:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.817 03:09:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:03.817 03:09:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:03.817 [2024-10-09 03:09:47.054578] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:07:03.817 { 00:07:03.817 "subsystems": [ 00:07:03.817 { 00:07:03.817 "subsystem": "bdev", 00:07:03.817 "config": [ 00:07:03.817 { 00:07:03.817 "params": { 00:07:03.817 "block_size": 512, 00:07:03.817 "num_blocks": 512, 00:07:03.817 "name": "malloc0" 00:07:03.817 }, 00:07:03.817 "method": "bdev_malloc_create" 00:07:03.817 }, 00:07:03.817 { 00:07:03.817 "method": "bdev_wait_for_examine" 00:07:03.817 } 00:07:03.817 ] 00:07:03.817 } 00:07:03.817 ] 00:07:03.817 } 00:07:03.817 [2024-10-09 03:09:47.055291] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62091 ] 00:07:04.075 [2024-10-09 03:09:47.196735] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.076 [2024-10-09 03:09:47.315672] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.334 [2024-10-09 03:09:47.391735] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:04.334 [2024-10-09 03:09:47.457756] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:07:04.334 [2024-10-09 03:09:47.457850] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:04.334 [2024-10-09 03:09:47.634000] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:04.592 03:09:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # es=228 00:07:04.592 03:09:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.592 03:09:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@662 -- # es=100 00:07:04.592 03:09:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # case "$es" in 00:07:04.592 03:09:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@670 -- # es=1 00:07:04.592 03:09:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.592 00:07:04.592 real 0m0.766s 00:07:04.592 user 0m0.526s 00:07:04.592 sys 0m0.196s 00:07:04.592 03:09:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.592 03:09:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:07:04.592 ************************************ 00:07:04.592 END TEST dd_invalid_output_count 00:07:04.592 ************************************ 00:07:04.592 03:09:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:07:04.592 03:09:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.592 03:09:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.592 03:09:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:04.592 ************************************ 00:07:04.592 START TEST dd_bs_not_multiple 00:07:04.592 ************************************ 00:07:04.592 03:09:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1125 -- # bs_not_multiple 00:07:04.592 03:09:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:04.592 03:09:47 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:04.592 03:09:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:07:04.592 03:09:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:04.592 03:09:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:04.592 03:09:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:07:04.593 03:09:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:04.593 03:09:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@650 -- # local es=0 00:07:04.593 03:09:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:04.593 03:09:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.593 03:09:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:07:04.593 03:09:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:07:04.593 03:09:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:07:04.593 03:09:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.593 03:09:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.593 03:09:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.593 03:09:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.593 03:09:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.593 03:09:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.593 03:09:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:04.593 03:09:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:04.593 [2024-10-09 03:09:47.876470] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:07:04.593 [2024-10-09 03:09:47.876571] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62123 ] 00:07:04.593 { 00:07:04.593 "subsystems": [ 00:07:04.593 { 00:07:04.593 "subsystem": "bdev", 00:07:04.593 "config": [ 00:07:04.593 { 00:07:04.593 "params": { 00:07:04.593 "block_size": 512, 00:07:04.593 "num_blocks": 512, 00:07:04.593 "name": "malloc0" 00:07:04.593 }, 00:07:04.593 "method": "bdev_malloc_create" 00:07:04.593 }, 00:07:04.593 { 00:07:04.593 "params": { 00:07:04.593 "block_size": 512, 00:07:04.593 "num_blocks": 512, 00:07:04.593 "name": "malloc1" 00:07:04.593 }, 00:07:04.593 "method": "bdev_malloc_create" 00:07:04.593 }, 00:07:04.593 { 00:07:04.593 "method": "bdev_wait_for_examine" 00:07:04.593 } 00:07:04.593 ] 00:07:04.593 } 00:07:04.593 ] 00:07:04.593 } 00:07:04.851 [2024-10-09 03:09:48.012845] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.851 [2024-10-09 03:09:48.127127] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.110 [2024-10-09 03:09:48.208539] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:05.110 [2024-10-09 03:09:48.284687] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:07:05.110 [2024-10-09 03:09:48.284745] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:05.368 [2024-10-09 03:09:48.462884] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:05.368 03:09:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # es=234 00:07:05.368 03:09:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:05.368 03:09:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@662 -- # es=106 00:07:05.368 03:09:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # case "$es" in 00:07:05.368 03:09:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@670 -- # es=1 00:07:05.368 03:09:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:05.368 00:07:05.368 real 0m0.766s 00:07:05.368 user 0m0.500s 00:07:05.368 sys 0m0.226s 00:07:05.368 03:09:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.368 03:09:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:07:05.368 ************************************ 00:07:05.368 END TEST dd_bs_not_multiple 00:07:05.368 ************************************ 00:07:05.368 00:07:05.368 real 0m7.608s 00:07:05.368 user 0m4.294s 00:07:05.368 sys 0m2.719s 00:07:05.368 03:09:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.368 03:09:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:05.368 ************************************ 00:07:05.368 END TEST spdk_dd_negative 00:07:05.368 ************************************ 00:07:05.368 00:07:05.368 real 1m25.620s 00:07:05.368 user 0m55.436s 00:07:05.368 sys 0m37.435s 00:07:05.368 03:09:48 spdk_dd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.368 03:09:48 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:05.368 
************************************ 00:07:05.368 END TEST spdk_dd 00:07:05.368 ************************************ 00:07:05.627 03:09:48 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:05.627 03:09:48 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:07:05.627 03:09:48 -- spdk/autotest.sh@256 -- # timing_exit lib 00:07:05.627 03:09:48 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:05.627 03:09:48 -- common/autotest_common.sh@10 -- # set +x 00:07:05.627 03:09:48 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:07:05.627 03:09:48 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:07:05.627 03:09:48 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:07:05.627 03:09:48 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:07:05.627 03:09:48 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:07:05.627 03:09:48 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:07:05.627 03:09:48 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:05.627 03:09:48 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:05.627 03:09:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.627 03:09:48 -- common/autotest_common.sh@10 -- # set +x 00:07:05.627 ************************************ 00:07:05.627 START TEST nvmf_tcp 00:07:05.627 ************************************ 00:07:05.627 03:09:48 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:05.627 * Looking for test storage... 00:07:05.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:05.627 03:09:48 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:05.627 03:09:48 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:07:05.627 03:09:48 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:05.886 03:09:48 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:05.886 03:09:48 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.886 03:09:48 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.886 03:09:48 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.886 03:09:48 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.886 03:09:48 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.886 03:09:48 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.886 03:09:48 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.886 03:09:48 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.886 03:09:48 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.886 03:09:48 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.886 03:09:48 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.886 03:09:48 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:05.886 03:09:48 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:05.886 03:09:48 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.886 03:09:48 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:05.886 03:09:48 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:05.886 03:09:48 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:05.886 03:09:48 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.886 03:09:48 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:05.886 03:09:48 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.886 03:09:48 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:05.886 03:09:48 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:05.886 03:09:48 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.886 03:09:48 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:05.886 03:09:48 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.886 03:09:48 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.886 03:09:48 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.886 03:09:48 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:05.886 03:09:48 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.886 03:09:48 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:05.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.886 --rc genhtml_branch_coverage=1 00:07:05.886 --rc genhtml_function_coverage=1 00:07:05.886 --rc genhtml_legend=1 00:07:05.886 --rc geninfo_all_blocks=1 00:07:05.886 --rc geninfo_unexecuted_blocks=1 00:07:05.886 00:07:05.886 ' 00:07:05.886 03:09:48 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:05.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.886 --rc genhtml_branch_coverage=1 00:07:05.886 --rc genhtml_function_coverage=1 00:07:05.886 --rc genhtml_legend=1 00:07:05.886 --rc geninfo_all_blocks=1 00:07:05.886 --rc geninfo_unexecuted_blocks=1 00:07:05.886 00:07:05.886 ' 00:07:05.886 03:09:48 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:05.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.886 --rc genhtml_branch_coverage=1 00:07:05.886 --rc genhtml_function_coverage=1 00:07:05.886 --rc genhtml_legend=1 00:07:05.886 --rc geninfo_all_blocks=1 00:07:05.886 --rc geninfo_unexecuted_blocks=1 00:07:05.886 00:07:05.886 ' 00:07:05.886 03:09:48 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:05.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.886 --rc genhtml_branch_coverage=1 00:07:05.886 --rc genhtml_function_coverage=1 00:07:05.886 --rc genhtml_legend=1 00:07:05.886 --rc geninfo_all_blocks=1 00:07:05.886 --rc geninfo_unexecuted_blocks=1 00:07:05.886 00:07:05.886 ' 00:07:05.886 03:09:48 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:05.886 03:09:48 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:05.886 03:09:48 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:05.886 03:09:48 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:05.886 03:09:48 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.886 03:09:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:05.886 ************************************ 00:07:05.886 START TEST nvmf_target_core 00:07:05.886 ************************************ 00:07:05.886 03:09:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:05.886 * Looking for test storage... 00:07:05.886 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:05.886 03:09:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:05.886 03:09:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:05.886 03:09:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:07:05.886 03:09:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:05.886 03:09:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.886 03:09:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.886 03:09:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.886 03:09:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.886 03:09:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.886 03:09:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.886 03:09:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.886 03:09:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.886 03:09:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.886 03:09:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:05.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.887 --rc genhtml_branch_coverage=1 00:07:05.887 --rc genhtml_function_coverage=1 00:07:05.887 --rc genhtml_legend=1 00:07:05.887 --rc geninfo_all_blocks=1 00:07:05.887 --rc geninfo_unexecuted_blocks=1 00:07:05.887 00:07:05.887 ' 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:05.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.887 --rc genhtml_branch_coverage=1 00:07:05.887 --rc genhtml_function_coverage=1 00:07:05.887 --rc genhtml_legend=1 00:07:05.887 --rc geninfo_all_blocks=1 00:07:05.887 --rc geninfo_unexecuted_blocks=1 00:07:05.887 00:07:05.887 ' 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:05.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.887 --rc genhtml_branch_coverage=1 00:07:05.887 --rc genhtml_function_coverage=1 00:07:05.887 --rc genhtml_legend=1 00:07:05.887 --rc geninfo_all_blocks=1 00:07:05.887 --rc geninfo_unexecuted_blocks=1 00:07:05.887 00:07:05.887 ' 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:05.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.887 --rc genhtml_branch_coverage=1 00:07:05.887 --rc genhtml_function_coverage=1 00:07:05.887 --rc genhtml_legend=1 00:07:05.887 --rc geninfo_all_blocks=1 00:07:05.887 --rc geninfo_unexecuted_blocks=1 00:07:05.887 00:07:05.887 ' 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:05.887 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.887 03:09:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:06.147 ************************************ 00:07:06.147 START TEST nvmf_host_management 00:07:06.147 ************************************ 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:06.147 * Looking for test storage... 
00:07:06.147 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:06.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.147 --rc genhtml_branch_coverage=1 00:07:06.147 --rc genhtml_function_coverage=1 00:07:06.147 --rc genhtml_legend=1 00:07:06.147 --rc geninfo_all_blocks=1 00:07:06.147 --rc geninfo_unexecuted_blocks=1 00:07:06.147 00:07:06.147 ' 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:06.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.147 --rc genhtml_branch_coverage=1 00:07:06.147 --rc genhtml_function_coverage=1 00:07:06.147 --rc genhtml_legend=1 00:07:06.147 --rc geninfo_all_blocks=1 00:07:06.147 --rc geninfo_unexecuted_blocks=1 00:07:06.147 00:07:06.147 ' 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:06.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.147 --rc genhtml_branch_coverage=1 00:07:06.147 --rc genhtml_function_coverage=1 00:07:06.147 --rc genhtml_legend=1 00:07:06.147 --rc geninfo_all_blocks=1 00:07:06.147 --rc geninfo_unexecuted_blocks=1 00:07:06.147 00:07:06.147 ' 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:06.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.147 --rc genhtml_branch_coverage=1 00:07:06.147 --rc genhtml_function_coverage=1 00:07:06.147 --rc genhtml_legend=1 00:07:06.147 --rc geninfo_all_blocks=1 00:07:06.147 --rc geninfo_unexecuted_blocks=1 00:07:06.147 00:07:06.147 ' 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:06.147 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:06.148 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:06.148 03:09:49 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@458 -- # nvmf_veth_init 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:06.148 Cannot find device "nvmf_init_br" 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:06.148 Cannot find device "nvmf_init_br2" 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:06.148 Cannot find device "nvmf_tgt_br" 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:07:06.148 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:06.407 Cannot find device "nvmf_tgt_br2" 00:07:06.407 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:07:06.407 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:06.407 Cannot find device "nvmf_init_br" 00:07:06.407 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:07:06.407 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:06.407 Cannot find device "nvmf_init_br2" 00:07:06.407 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:07:06.407 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:06.407 Cannot find device "nvmf_tgt_br" 00:07:06.407 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:07:06.407 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:06.407 Cannot find device "nvmf_tgt_br2" 00:07:06.407 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:07:06.407 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:06.407 Cannot find device "nvmf_br" 00:07:06.407 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:07:06.407 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:06.407 Cannot find device "nvmf_init_if" 00:07:06.407 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:07:06.407 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:06.407 Cannot find device "nvmf_init_if2" 00:07:06.407 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:07:06.407 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:06.407 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:06.407 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:07:06.407 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:06.407 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:06.407 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:07:06.407 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:06.407 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:06.407 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:06.407 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:06.407 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:06.407 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:06.407 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:06.407 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:06.407 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:06.407 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:06.407 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:06.408 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:06.408 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:06.408 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:06.408 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:06.408 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:06.408 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:06.408 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:06.408 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:06.408 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:06.408 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:06.667 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:06.667 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:07:06.667 00:07:06.667 --- 10.0.0.3 ping statistics --- 00:07:06.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.667 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:06.667 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:06.667 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:07:06.667 00:07:06.667 --- 10.0.0.4 ping statistics --- 00:07:06.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.667 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:06.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:06.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:07:06.667 00:07:06.667 --- 10.0.0.1 ping statistics --- 00:07:06.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.667 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:06.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:06.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:07:06.667 00:07:06.667 --- 10.0.0.2 ping statistics --- 00:07:06.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.667 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # return 0 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=62464 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 62464 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 62464 ']' 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:06.667 03:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:06.667 [2024-10-09 03:09:49.943157] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:07:06.667 [2024-10-09 03:09:49.943436] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.926 [2024-10-09 03:09:50.086714] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:06.926 [2024-10-09 03:09:50.207372] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:06.926 [2024-10-09 03:09:50.207654] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:06.926 [2024-10-09 03:09:50.207826] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:06.926 [2024-10-09 03:09:50.208124] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:06.926 [2024-10-09 03:09:50.208278] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:06.926 [2024-10-09 03:09:50.209699] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.926 [2024-10-09 03:09:50.209761] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:07:06.926 [2024-10-09 03:09:50.209810] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:07:06.926 [2024-10-09 03:09:50.209814] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.184 [2024-10-09 03:09:50.269527] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:07.752 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:07.752 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:07.752 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:07.752 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:07.752 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.752 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:07.752 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:07.752 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.752 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.014 [2024-10-09 03:09:51.053759] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:08.014 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.014 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:08.014 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:08.014 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.014 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
00:07:08.014 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:08.014 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:08.014 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.014 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.014 Malloc0 00:07:08.014 [2024-10-09 03:09:51.122101] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:08.014 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.014 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:08.014 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:08.014 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.014 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62522 00:07:08.014 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62522 /var/tmp/bdevperf.sock 00:07:08.014 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:08.014 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:08.014 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:07:08.014 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:07:08.014 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:08.014 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:08.014 { 00:07:08.014 "params": { 00:07:08.014 "name": "Nvme$subsystem", 00:07:08.014 "trtype": "$TEST_TRANSPORT", 00:07:08.014 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:08.014 "adrfam": "ipv4", 00:07:08.014 "trsvcid": "$NVMF_PORT", 00:07:08.014 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:08.014 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:08.014 "hdgst": ${hdgst:-false}, 00:07:08.014 "ddgst": ${ddgst:-false} 00:07:08.014 }, 00:07:08.014 "method": "bdev_nvme_attach_controller" 00:07:08.014 } 00:07:08.014 EOF 00:07:08.014 )") 00:07:08.014 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 62522 ']' 00:07:08.014 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:08.014 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:07:08.014 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.014 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:07:08.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:08.014 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.014 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.014 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:07:08.014 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:07:08.014 03:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:08.014 "params": { 00:07:08.014 "name": "Nvme0", 00:07:08.014 "trtype": "tcp", 00:07:08.014 "traddr": "10.0.0.3", 00:07:08.014 "adrfam": "ipv4", 00:07:08.014 "trsvcid": "4420", 00:07:08.014 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:08.014 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:08.014 "hdgst": false, 00:07:08.014 "ddgst": false 00:07:08.014 }, 00:07:08.014 "method": "bdev_nvme_attach_controller" 00:07:08.014 }' 00:07:08.014 [2024-10-09 03:09:51.235185] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:07:08.014 [2024-10-09 03:09:51.236044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62522 ] 00:07:08.280 [2024-10-09 03:09:51.378483] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.280 [2024-10-09 03:09:51.529173] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.538 [2024-10-09 03:09:51.612835] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:08.538 Running I/O for 10 seconds... 
00:07:09.107 03:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.107 03:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:09.107 03:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:09.107 03:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.107 03:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:09.107 03:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.107 03:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:09.107 03:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:09.107 03:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:09.107 03:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:09.107 03:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:09.107 03:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:09.107 03:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:09.107 03:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:09.107 03:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:09.107 03:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:09.107 03:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.107 03:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:09.107 03:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.107 03:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:07:09.108 03:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:07:09.108 03:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:09.108 03:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:09.108 03:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:09.108 03:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:09.108 03:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.108 03:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:09.108 [2024-10-09 
03:09:52.309367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:09.108 [2024-10-09 03:09:52.309448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.309480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:09.108 [2024-10-09 03:09:52.309489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.309499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:09.108 [2024-10-09 03:09:52.309506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.309515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:09.108 [2024-10-09 03:09:52.309523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.309532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a58b20 is same with the state(6) to be set 00:07:09.108 03:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.108 03:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:09.108 03:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.108 03:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:09.108 03:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.108 03:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:09.108 [2024-10-09 03:09:52.328112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.108 [2024-10-09 03:09:52.328142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.328193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.108 [2024-10-09 03:09:52.328202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.328212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.108 [2024-10-09 03:09:52.328222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.328232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:07:09.108 [2024-10-09 03:09:52.328240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.328251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.108 [2024-10-09 03:09:52.328259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.328269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.108 [2024-10-09 03:09:52.328277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.328287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.108 [2024-10-09 03:09:52.328295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.328305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.108 [2024-10-09 03:09:52.328314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.328324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.108 [2024-10-09 03:09:52.328333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.328343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:115712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.108 [2024-10-09 03:09:52.328352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.328362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.108 [2024-10-09 03:09:52.328371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.328380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.108 [2024-10-09 03:09:52.328388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.328398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:116096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.108 [2024-10-09 03:09:52.328416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.328426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:09.108 [2024-10-09 03:09:52.328434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.328444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.108 [2024-10-09 03:09:52.328452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.328462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.108 [2024-10-09 03:09:52.328470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.328480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.108 [2024-10-09 03:09:52.328495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.328505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:116736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.108 [2024-10-09 03:09:52.328528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.328538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:116864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.108 [2024-10-09 03:09:52.328547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.328558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:116992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.108 [2024-10-09 03:09:52.328566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.328576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.108 [2024-10-09 03:09:52.328584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.328594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.108 [2024-10-09 03:09:52.328602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.328611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:117376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.108 [2024-10-09 03:09:52.328619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.328629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:09.108 [2024-10-09 03:09:52.328637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.328647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.108 [2024-10-09 03:09:52.328655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.328666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:117760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.108 [2024-10-09 03:09:52.328674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.328684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:117888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.108 [2024-10-09 03:09:52.328692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.328702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.108 [2024-10-09 03:09:52.328710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.328720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.108 [2024-10-09 03:09:52.328728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.328737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:118272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.108 [2024-10-09 03:09:52.328745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.328755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:118400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.108 [2024-10-09 03:09:52.328763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.108 [2024-10-09 03:09:52.328773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:118528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.108 [2024-10-09 03:09:52.328781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.109 [2024-10-09 03:09:52.328790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:118656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.109 [2024-10-09 03:09:52.328804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.109 [2024-10-09 03:09:52.328813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:118784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:09.109 [2024-10-09 03:09:52.328822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.109 [2024-10-09 03:09:52.328832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:118912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.109 [2024-10-09 03:09:52.328841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.109 [2024-10-09 03:09:52.328850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.109 [2024-10-09 03:09:52.328858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.109 [2024-10-09 03:09:52.328868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:119168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.109 [2024-10-09 03:09:52.328876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.109 [2024-10-09 03:09:52.328887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.109 [2024-10-09 03:09:52.328896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.109 [2024-10-09 03:09:52.328905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.109 [2024-10-09 03:09:52.328914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.109 [2024-10-09 03:09:52.328924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.109 [2024-10-09 03:09:52.328932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.109 [2024-10-09 03:09:52.328942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:119680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.109 [2024-10-09 03:09:52.328950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.109 [2024-10-09 03:09:52.328960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.109 [2024-10-09 03:09:52.328968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.109 [2024-10-09 03:09:52.328978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:119936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.109 [2024-10-09 03:09:52.328985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.109 [2024-10-09 03:09:52.328995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:09.109 [2024-10-09 03:09:52.329003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.109 [2024-10-09 03:09:52.329012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:120192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.109 [2024-10-09 03:09:52.329020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.109 [2024-10-09 03:09:52.329029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.109 [2024-10-09 03:09:52.329038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.109 [2024-10-09 03:09:52.329048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.109 [2024-10-09 03:09:52.329056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.109 [2024-10-09 03:09:52.329065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.109 [2024-10-09 03:09:52.329079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.109 [2024-10-09 03:09:52.329089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.109 [2024-10-09 03:09:52.329133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.109 [2024-10-09 03:09:52.329144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.109 [2024-10-09 03:09:52.329154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.109 [2024-10-09 03:09:52.329164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.109 [2024-10-09 03:09:52.329186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.109 [2024-10-09 03:09:52.329198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:121088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.109 [2024-10-09 03:09:52.329207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.109 [2024-10-09 03:09:52.329217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.109 [2024-10-09 03:09:52.329226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.109 [2024-10-09 03:09:52.329236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:09.109 [2024-10-09 03:09:52.329248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.109 [2024-10-09 03:09:52.329257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:121472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.109 [2024-10-09 03:09:52.329266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.109 [2024-10-09 03:09:52.329276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.109 [2024-10-09 03:09:52.329285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.109 [2024-10-09 03:09:52.329295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.109 [2024-10-09 03:09:52.329303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.109 [2024-10-09 03:09:52.329313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:121856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.109 [2024-10-09 03:09:52.329321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.109 [2024-10-09 03:09:52.329332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:121984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.109 [2024-10-09 03:09:52.329340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.109 [2024-10-09 03:09:52.329349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:122112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.109 [2024-10-09 03:09:52.329357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.109 [2024-10-09 03:09:52.329367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:122240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.109 [2024-10-09 03:09:52.329378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.109 [2024-10-09 03:09:52.329388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:122368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.109 [2024-10-09 03:09:52.329396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.109 [2024-10-09 03:09:52.329405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:122496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.109 [2024-10-09 03:09:52.329413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.109 [2024-10-09 03:09:52.329423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:09.109 [2024-10-09 03:09:52.329431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.109 [2024-10-09 03:09:52.329440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a586b0 is same with the state(6) to be set 00:07:09.109 [2024-10-09 03:09:52.329563] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a586b0 was disconnected and freed. reset controller. 00:07:09.109 [2024-10-09 03:09:52.329627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a58b20 (9): Bad file descriptor 00:07:09.109 [2024-10-09 03:09:52.330742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:09.109 task offset: 114560 on job bdev=Nvme0n1 fails 00:07:09.109 00:07:09.109 Latency(us) 00:07:09.109 [2024-10-09T03:09:52.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:09.109 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:09.109 Job: Nvme0n1 ended in about 0.58 seconds with error 00:07:09.109 Verification LBA range: start 0x0 length 0x400 00:07:09.109 Nvme0n1 : 0.58 1549.93 96.87 110.83 0.00 37514.03 1884.16 37653.41 00:07:09.109 [2024-10-09T03:09:52.412Z] =================================================================================================================== 00:07:09.109 [2024-10-09T03:09:52.412Z] Total : 1549.93 96.87 110.83 0.00 37514.03 1884.16 37653.41 00:07:09.109 [2024-10-09 03:09:52.332657] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:09.109 [2024-10-09 03:09:52.340868] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
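The ABORTED - SQ DELETION completions and the controller reset above are the intended effect of the step traced earlier: nvmf_subsystem_remove_host is issued while bdevperf's verify job is in flight, the queued commands are aborted, the job ends with an error, and the host is added back before the script moves on. A hedged sketch of that RPC pair against a locally running target (the NQNs are taken from this trace; using scripts/rpc.py with its default socket is an assumption):

scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1   # give in-flight I/O time to complete as ABORTED - SQ DELETION, as in the dump above
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0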
00:07:10.046 03:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62522 00:07:10.046 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62522) - No such process 00:07:10.046 03:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:10.046 03:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:10.046 03:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:10.046 03:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:10.046 03:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:07:10.046 03:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:07:10.046 03:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:10.046 03:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:10.046 { 00:07:10.046 "params": { 00:07:10.046 "name": "Nvme$subsystem", 00:07:10.046 "trtype": "$TEST_TRANSPORT", 00:07:10.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:10.046 "adrfam": "ipv4", 00:07:10.046 "trsvcid": "$NVMF_PORT", 00:07:10.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:10.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:10.046 "hdgst": ${hdgst:-false}, 00:07:10.046 "ddgst": ${ddgst:-false} 00:07:10.046 }, 00:07:10.046 "method": "bdev_nvme_attach_controller" 00:07:10.046 } 00:07:10.046 EOF 00:07:10.046 )") 00:07:10.046 03:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:07:10.046 03:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:07:10.046 03:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:07:10.046 03:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:10.046 "params": { 00:07:10.046 "name": "Nvme0", 00:07:10.046 "trtype": "tcp", 00:07:10.046 "traddr": "10.0.0.3", 00:07:10.046 "adrfam": "ipv4", 00:07:10.046 "trsvcid": "4420", 00:07:10.046 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:10.046 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:10.046 "hdgst": false, 00:07:10.046 "ddgst": false 00:07:10.046 }, 00:07:10.046 "method": "bdev_nvme_attach_controller" 00:07:10.046 }' 00:07:10.305 [2024-10-09 03:09:53.394881] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
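The /dev/fd/62 path in the bdevperf command above is the file descriptor that bash process substitution hands to the program for the JSON produced by gen_nvmf_target_json. An equivalent shape of that invocation, assuming the helper is sourced from test/nvmf/common.sh and the repository root is the working directory:

./build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1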
00:07:10.305 [2024-10-09 03:09:53.395241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62566 ] 00:07:10.305 [2024-10-09 03:09:53.535358] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.564 [2024-10-09 03:09:53.648571] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.564 [2024-10-09 03:09:53.731307] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:10.564 Running I/O for 1 seconds... 00:07:11.941 1536.00 IOPS, 96.00 MiB/s 00:07:11.941 Latency(us) 00:07:11.941 [2024-10-09T03:09:55.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:11.941 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:11.941 Verification LBA range: start 0x0 length 0x400 00:07:11.941 Nvme0n1 : 1.02 1562.40 97.65 0.00 0.00 40209.23 4140.68 36700.16 00:07:11.941 [2024-10-09T03:09:55.244Z] =================================================================================================================== 00:07:11.941 [2024-10-09T03:09:55.244Z] Total : 1562.40 97.65 0.00 0.00 40209.23 4140.68 36700.16 00:07:11.941 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:11.941 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:11.941 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:11.941 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:11.941 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:11.941 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:11.941 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:12.200 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:12.200 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:12.200 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:12.200 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:12.200 rmmod nvme_tcp 00:07:12.200 rmmod nvme_fabrics 00:07:12.200 rmmod nvme_keyring 00:07:12.200 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:12.200 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:12.200 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:12.200 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 62464 ']' 00:07:12.200 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 62464 00:07:12.200 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 62464 ']' 00:07:12.200 03:09:55 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 62464 00:07:12.200 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:12.200 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:12.200 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62464 00:07:12.200 killing process with pid 62464 00:07:12.200 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:12.200 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:12.200 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62464' 00:07:12.200 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 62464 00:07:12.200 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 62464 00:07:12.459 [2024-10-09 03:09:55.607377] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:12.459 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:12.459 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:12.459 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:12.459 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:12.459 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:07:12.459 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:12.459 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:07:12.459 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:12.459 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:12.459 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:12.459 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:12.459 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:12.459 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:12.459 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:12.459 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:12.459 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:12.459 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:12.459 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:12.718 03:09:55 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:12.718 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:12.718 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:12.718 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:12.718 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:12.718 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.718 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:12.718 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.718 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:07:12.718 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:12.718 00:07:12.718 real 0m6.744s 00:07:12.718 user 0m24.701s 00:07:12.718 sys 0m1.774s 00:07:12.718 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.718 ************************************ 00:07:12.718 END TEST nvmf_host_management 00:07:12.718 ************************************ 00:07:12.718 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:12.718 03:09:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:12.718 03:09:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:12.718 03:09:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.718 03:09:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:12.718 ************************************ 00:07:12.718 START TEST nvmf_lvol 00:07:12.718 ************************************ 00:07:12.718 03:09:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:12.978 * Looking for test storage... 
00:07:12.978 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:12.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.978 --rc genhtml_branch_coverage=1 00:07:12.978 --rc genhtml_function_coverage=1 00:07:12.978 --rc genhtml_legend=1 00:07:12.978 --rc geninfo_all_blocks=1 00:07:12.978 --rc geninfo_unexecuted_blocks=1 00:07:12.978 00:07:12.978 ' 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:12.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.978 --rc genhtml_branch_coverage=1 00:07:12.978 --rc genhtml_function_coverage=1 00:07:12.978 --rc genhtml_legend=1 00:07:12.978 --rc geninfo_all_blocks=1 00:07:12.978 --rc geninfo_unexecuted_blocks=1 00:07:12.978 00:07:12.978 ' 00:07:12.978 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:12.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.978 --rc genhtml_branch_coverage=1 00:07:12.978 --rc genhtml_function_coverage=1 00:07:12.978 --rc genhtml_legend=1 00:07:12.978 --rc geninfo_all_blocks=1 00:07:12.978 --rc geninfo_unexecuted_blocks=1 00:07:12.978 00:07:12.979 ' 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:12.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.979 --rc genhtml_branch_coverage=1 00:07:12.979 --rc genhtml_function_coverage=1 00:07:12.979 --rc genhtml_legend=1 00:07:12.979 --rc geninfo_all_blocks=1 00:07:12.979 --rc geninfo_unexecuted_blocks=1 00:07:12.979 00:07:12.979 ' 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:12.979 03:09:56 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:12.979 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:12.979 
03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@458 -- # nvmf_veth_init 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
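The assignments above and the ip commands in the trace that follows build the test's virtual topology: an nvmf_tgt_ns_spdk network namespace for the target, veth pairs for initiator and target, and an nvmf_br bridge joining them, with the initiator on 10.0.0.1/10.0.0.2 and the target on 10.0.0.3/10.0.0.4. A condensed sketch of the core of that setup (interface names and addresses are taken from this trace; it assumes root privileges and omits the second interface pair and the stale-device cleanup):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up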
00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:12.979 Cannot find device "nvmf_init_br" 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:12.979 Cannot find device "nvmf_init_br2" 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:12.979 Cannot find device "nvmf_tgt_br" 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:12.979 Cannot find device "nvmf_tgt_br2" 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:07:12.979 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:13.238 Cannot find device "nvmf_init_br" 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:13.238 Cannot find device "nvmf_init_br2" 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:13.238 Cannot find device "nvmf_tgt_br" 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:13.238 Cannot find device "nvmf_tgt_br2" 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:13.238 Cannot find device "nvmf_br" 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:13.238 Cannot find device "nvmf_init_if" 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:13.238 Cannot find device "nvmf_init_if2" 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:13.238 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:13.238 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:13.238 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:13.498 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:13.498 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:13.498 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:13.498 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:13.498 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:13.498 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:13.498 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:13.498 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:13.498 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:13.498 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:13.498 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:07:13.498 00:07:13.498 --- 10.0.0.3 ping statistics --- 00:07:13.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.498 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:07:13.498 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:13.498 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:13.498 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:07:13.498 00:07:13.498 --- 10.0.0.4 ping statistics --- 00:07:13.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.498 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:07:13.498 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:13.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:13.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:07:13.498 00:07:13.498 --- 10.0.0.1 ping statistics --- 00:07:13.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.498 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:07:13.498 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:13.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:13.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:07:13.498 00:07:13.498 --- 10.0.0.2 ping statistics --- 00:07:13.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.498 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:07:13.498 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:13.498 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # return 0 00:07:13.498 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:13.498 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:13.498 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:13.498 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:13.498 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:13.498 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:13.498 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:13.498 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:13.498 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:13.498 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:13.498 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:13.498 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=62842 00:07:13.498 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:13.498 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 62842 00:07:13.498 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 62842 ']' 00:07:13.498 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.498 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.498 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.498 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.498 03:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:13.498 [2024-10-09 03:09:56.697069] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
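Condensed from the nvmf_veth_init trace above, the network the harness builds for this run is four veth pairs whose target ends live in the nvmf_tgt_ns_spdk namespace, all joined by one bridge on the host. The sketch below is a reconstruction, not the common.sh helper itself; interface names and 10.0.0.x addresses are the ones logged.

#!/usr/bin/env bash
# Condensed reconstruction of the nvmf_veth_init steps traced above.
set -e

ip netns add nvmf_tgt_ns_spdk

# Four veth pairs: the *_if ends carry addresses, the *_br ends get bridged.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target ends move into the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Initiator side: 10.0.0.1/2 on the host; target side: 10.0.0.3/4 in the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# One bridge ties the peer ends together so 10.0.0.1/2 can reach 10.0.0.3/4.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

ping -c 1 10.0.0.3   # reachability check, as in the trace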
00:07:13.498 [2024-10-09 03:09:56.697165] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.757 [2024-10-09 03:09:56.839238] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:13.757 [2024-10-09 03:09:56.975150] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:13.757 [2024-10-09 03:09:56.975478] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:13.757 [2024-10-09 03:09:56.975700] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:13.757 [2024-10-09 03:09:56.975854] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:13.757 [2024-10-09 03:09:56.975905] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:13.757 [2024-10-09 03:09:56.976740] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.757 [2024-10-09 03:09:56.976887] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.757 [2024-10-09 03:09:56.976894] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.757 [2024-10-09 03:09:57.052410] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:14.697 03:09:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.697 03:09:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:14.697 03:09:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:14.697 03:09:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:14.697 03:09:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:14.697 03:09:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:14.697 03:09:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:14.958 [2024-10-09 03:09:58.138605] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.958 03:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:15.217 03:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:15.217 03:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:15.840 03:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:15.840 03:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:15.840 03:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:16.099 03:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=af10b841-ce42-4446-902e-73e3a179546e 00:07:16.099 03:09:59 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u af10b841-ce42-4446-902e-73e3a179546e lvol 20 00:07:16.358 03:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=69abcb24-d379-4852-bfe1-dc56bd5fe038 00:07:16.358 03:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:16.617 03:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 69abcb24-d379-4852-bfe1-dc56bd5fe038 00:07:16.876 03:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:17.135 [2024-10-09 03:10:00.343586] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:17.135 03:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:17.393 03:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62918 00:07:17.393 03:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:17.393 03:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:18.770 03:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 69abcb24-d379-4852-bfe1-dc56bd5fe038 MY_SNAPSHOT 00:07:18.770 03:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=fcfbdf12-d596-4756-8f64-8061f3acea16 00:07:18.770 03:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 69abcb24-d379-4852-bfe1-dc56bd5fe038 30 00:07:19.029 03:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone fcfbdf12-d596-4756-8f64-8061f3acea16 MY_CLONE 00:07:19.288 03:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=865f1381-be13-46a1-a7fe-0cfb57c4612f 00:07:19.288 03:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 865f1381-be13-46a1-a7fe-0cfb57c4612f 00:07:19.856 03:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62918 00:07:28.044 Initializing NVMe Controllers 00:07:28.044 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:07:28.044 Controller IO queue size 128, less than required. 00:07:28.044 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:28.044 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:28.044 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:28.044 Initialization complete. Launching workers. 
00:07:28.045 ======================================================== 00:07:28.045 Latency(us) 00:07:28.045 Device Information : IOPS MiB/s Average min max 00:07:28.045 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 7028.50 27.46 18229.19 651.13 105539.40 00:07:28.045 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 7547.90 29.48 16977.94 3336.65 82986.39 00:07:28.045 ======================================================== 00:07:28.045 Total : 14576.40 56.94 17581.27 651.13 105539.40 00:07:28.045 00:07:28.045 03:10:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:28.045 03:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 69abcb24-d379-4852-bfe1-dc56bd5fe038 00:07:28.304 03:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u af10b841-ce42-4446-902e-73e3a179546e 00:07:28.872 03:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:28.872 03:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:28.872 03:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:28.872 03:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:28.872 03:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:28.872 03:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:28.872 03:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:28.872 03:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:28.872 03:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:28.872 rmmod nvme_tcp 00:07:28.872 rmmod nvme_fabrics 00:07:28.872 rmmod nvme_keyring 00:07:28.872 03:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:28.872 03:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:28.872 03:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:28.872 03:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 62842 ']' 00:07:28.872 03:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 62842 00:07:28.872 03:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 62842 ']' 00:07:28.872 03:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 62842 00:07:28.872 03:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:07:28.872 03:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:28.872 03:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62842 00:07:28.872 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:28.872 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:28.872 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process 
with pid 62842' 00:07:28.872 killing process with pid 62842 00:07:28.872 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 62842 00:07:28.872 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 62842 00:07:29.131 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:29.131 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:29.131 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:29.131 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:29.131 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:07:29.131 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:29.132 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:07:29.132 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:29.132 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:29.132 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:29.132 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:29.132 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:29.391 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:29.391 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:29.391 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:29.391 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:29.391 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:29.391 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:29.391 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:29.391 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:29.391 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:29.391 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:29.391 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:29.391 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.391 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.391 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.391 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:07:29.391 00:07:29.391 real 0m16.660s 00:07:29.391 user 1m7.499s 00:07:29.391 sys 0m4.276s 00:07:29.391 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:07:29.391 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:29.391 ************************************ 00:07:29.391 END TEST nvmf_lvol 00:07:29.391 ************************************ 00:07:29.391 03:10:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:29.391 03:10:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:29.391 03:10:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.391 03:10:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:29.651 ************************************ 00:07:29.651 START TEST nvmf_lvs_grow 00:07:29.651 ************************************ 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:29.651 * Looking for test storage... 00:07:29.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:29.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.651 --rc genhtml_branch_coverage=1 00:07:29.651 --rc genhtml_function_coverage=1 00:07:29.651 --rc genhtml_legend=1 00:07:29.651 --rc geninfo_all_blocks=1 00:07:29.651 --rc geninfo_unexecuted_blocks=1 00:07:29.651 00:07:29.651 ' 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:29.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.651 --rc genhtml_branch_coverage=1 00:07:29.651 --rc genhtml_function_coverage=1 00:07:29.651 --rc genhtml_legend=1 00:07:29.651 --rc geninfo_all_blocks=1 00:07:29.651 --rc geninfo_unexecuted_blocks=1 00:07:29.651 00:07:29.651 ' 00:07:29.651 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:29.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.651 --rc genhtml_branch_coverage=1 00:07:29.651 --rc genhtml_function_coverage=1 00:07:29.651 --rc genhtml_legend=1 00:07:29.651 --rc geninfo_all_blocks=1 00:07:29.651 --rc geninfo_unexecuted_blocks=1 00:07:29.651 00:07:29.651 ' 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:29.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.652 --rc genhtml_branch_coverage=1 00:07:29.652 --rc genhtml_function_coverage=1 00:07:29.652 --rc genhtml_legend=1 00:07:29.652 --rc geninfo_all_blocks=1 00:07:29.652 --rc geninfo_unexecuted_blocks=1 00:07:29.652 00:07:29.652 ' 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:29.652 03:10:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:29.652 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
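Stripped of the xtrace prefixes, the nvmf_lvol run that completed above reduces to the following rpc.py sequence. Commands and arguments are copied from the trace; the logged UUIDs are replaced by shell variables, so this is a sketch of the flow rather than the literal target/nvmf_lvol.sh script.

#!/usr/bin/env bash
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc_py nvmf_create_transport -t tcp -o -u 8192

# Two 64 MiB malloc bdevs striped into a raid0, with an lvolstore and a 20 MiB lvol on top.
$rpc_py bdev_malloc_create 64 512      # -> Malloc0
$rpc_py bdev_malloc_create 64 512      # -> Malloc1
$rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc_py bdev_lvol_create_lvstore raid0 lvs)
lvol=$($rpc_py bdev_lvol_create -u "$lvs" lvol 20)

# Export the lvol over NVMe/TCP on the namespaced target address.
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
$rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

# While spdk_nvme_perf writes to the namespace in the background
# (-o 4096 -q 128 -s 512 -w randwrite -t 10, as logged), exercise the lvol operations:
snapshot=$($rpc_py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc_py bdev_lvol_resize "$lvol" 30
clone=$($rpc_py bdev_lvol_clone "$snapshot" MY_CLONE)
$rpc_py bdev_lvol_inflate "$clone"

# Teardown mirrors creation in reverse.
$rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc_py bdev_lvol_delete "$lvol"
$rpc_py bdev_lvol_delete_lvstore -u "$lvs"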
00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@458 -- # nvmf_veth_init 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:29.652 Cannot find device "nvmf_init_br" 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:29.652 Cannot find device "nvmf_init_br2" 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:29.652 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:29.912 Cannot find device "nvmf_tgt_br" 00:07:29.912 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:07:29.912 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:29.912 Cannot find device "nvmf_tgt_br2" 00:07:29.912 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:07:29.912 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:29.912 Cannot find device "nvmf_init_br" 00:07:29.912 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:07:29.912 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:29.912 Cannot find device "nvmf_init_br2" 00:07:29.912 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:07:29.912 03:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:29.912 Cannot find device "nvmf_tgt_br" 00:07:29.912 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:07:29.912 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:29.912 Cannot find device "nvmf_tgt_br2" 00:07:29.912 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:07:29.912 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:29.912 Cannot find device "nvmf_br" 00:07:29.912 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:07:29.912 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:29.912 Cannot find device "nvmf_init_if" 00:07:29.912 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:07:29.912 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:29.912 Cannot find device "nvmf_init_if2" 00:07:29.912 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:07:29.912 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:29.912 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:29.912 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:07:29.912 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:29.912 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:07:29.912 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:07:29.912 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:29.912 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:29.912 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:29.912 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:29.912 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:29.912 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:29.912 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:29.912 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:29.912 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:29.912 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:29.912 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:29.912 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:30.171 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:30.171 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:30.171 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:30.171 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:30.171 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:30.171 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:30.171 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:30.171 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:30.171 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:30.171 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:30.171 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:30.171 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:30.171 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:30.171 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
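The ipts calls that follow are wrappers defined in nvmf/common.sh: every rule the harness inserts carries an SPDK_NVMF comment naming its original arguments, and the iptr helper seen in the earlier teardown removes exactly those tagged rules. A minimal sketch of that pattern, using the first rule from the trace:

# What ipts does: insert the rule, tagged so it can be found again later.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
# What iptr does at teardown: rewrite the ruleset without any SPDK_NVMF-tagged entries.
iptables-save | grep -v SPDK_NVMF | iptables-restore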
00:07:30.171 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:30.171 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:30.171 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:30.171 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:30.171 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:30.172 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:30.172 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:30.172 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:30.172 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:07:30.172 00:07:30.172 --- 10.0.0.3 ping statistics --- 00:07:30.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.172 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:07:30.172 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:30.172 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:30.172 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.108 ms 00:07:30.172 00:07:30.172 --- 10.0.0.4 ping statistics --- 00:07:30.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.172 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:07:30.172 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:30.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:30.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:07:30.172 00:07:30.172 --- 10.0.0.1 ping statistics --- 00:07:30.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.172 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:07:30.172 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:30.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:30.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:07:30.172 00:07:30.172 --- 10.0.0.2 ping statistics --- 00:07:30.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.172 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:07:30.172 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:30.172 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # return 0 00:07:30.172 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:30.172 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:30.172 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:30.172 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:30.172 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:30.172 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:30.172 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:30.172 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:30.172 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:30.172 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:30.172 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:30.172 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=63298 00:07:30.172 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 63298 00:07:30.172 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 63298 ']' 00:07:30.172 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.172 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.172 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.172 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.172 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:30.172 03:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:30.172 [2024-10-09 03:10:13.441717] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
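nvmfappstart has just launched the target for this test inside the namespace (pid 63298, core mask 0x1), and waitforlisten blocks until /var/tmp/spdk.sock answers RPCs; nvmftestfini later undoes all of it. The following is a condensed sketch assembled from this trace and the earlier teardown: the polling loop is only a stand-in for the harness's waitforlisten, and the final netns delete is an assumption about what _remove_spdk_ns amounts to here.

#!/usr/bin/env bash
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Start: run nvmf_tgt inside the namespace so its listeners bind the 10.0.0.3/4 side.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
until $rpc_py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1          # stand-in for waitforlisten $nvmfpid
done

# ... test body runs here ...

# Stop: kill the target, unload the kernel initiator modules,
# strip the tagged firewall rules, and dismantle the veth/bridge/namespace setup.
kill "$nvmfpid"; wait "$nvmfpid" || true
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk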
00:07:30.172 [2024-10-09 03:10:13.441827] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.431 [2024-10-09 03:10:13.582729] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.431 [2024-10-09 03:10:13.705612] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:30.431 [2024-10-09 03:10:13.705677] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:30.431 [2024-10-09 03:10:13.705691] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:30.431 [2024-10-09 03:10:13.705702] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:30.431 [2024-10-09 03:10:13.705712] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:30.431 [2024-10-09 03:10:13.706272] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.689 [2024-10-09 03:10:13.784344] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:31.257 03:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.257 03:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:07:31.257 03:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:31.257 03:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:31.257 03:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:31.257 03:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:31.257 03:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:31.517 [2024-10-09 03:10:14.776806] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:31.517 03:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:31.517 03:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:31.517 03:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.517 03:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:31.517 ************************************ 00:07:31.517 START TEST lvs_grow_clean 00:07:31.517 ************************************ 00:07:31.517 03:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:07:31.517 03:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:31.517 03:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:31.517 03:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:31.517 03:10:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:31.517 03:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:31.517 03:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:31.517 03:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:31.517 03:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:31.517 03:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:32.084 03:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:32.084 03:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:32.343 03:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e2d91836-5523-436f-a012-9dfc73eaa7d5 00:07:32.343 03:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2d91836-5523-436f-a012-9dfc73eaa7d5 00:07:32.343 03:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:32.603 03:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:32.603 03:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:32.603 03:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e2d91836-5523-436f-a012-9dfc73eaa7d5 lvol 150 00:07:32.861 03:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=8484a680-2e7f-4d77-8700-f171ff6f323a 00:07:32.861 03:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:32.861 03:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:33.119 [2024-10-09 03:10:16.186972] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:33.119 [2024-10-09 03:10:16.187086] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:33.119 true 00:07:33.119 03:10:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:33.119 03:10:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2d91836-5523-436f-a012-9dfc73eaa7d5 00:07:33.378 03:10:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:33.378 03:10:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:33.637 03:10:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8484a680-2e7f-4d77-8700-f171ff6f323a 00:07:33.896 03:10:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:33.896 [2024-10-09 03:10:17.175507] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:33.896 03:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:34.155 03:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63386 00:07:34.155 03:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:34.155 03:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:34.155 03:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63386 /var/tmp/bdevperf.sock 00:07:34.155 03:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 63386 ']' 00:07:34.155 03:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:34.155 03:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:34.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:34.155 03:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:34.155 03:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:34.155 03:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:34.414 [2024-10-09 03:10:17.491783] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:07:34.414 [2024-10-09 03:10:17.491896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63386 ] 00:07:34.414 [2024-10-09 03:10:17.628165] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.673 [2024-10-09 03:10:17.719468] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.673 [2024-10-09 03:10:17.778432] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:35.242 03:10:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:35.242 03:10:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:07:35.242 03:10:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:35.501 Nvme0n1 00:07:35.501 03:10:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:36.092 [ 00:07:36.092 { 00:07:36.092 "name": "Nvme0n1", 00:07:36.092 "aliases": [ 00:07:36.092 "8484a680-2e7f-4d77-8700-f171ff6f323a" 00:07:36.092 ], 00:07:36.092 "product_name": "NVMe disk", 00:07:36.092 "block_size": 4096, 00:07:36.092 "num_blocks": 38912, 00:07:36.092 "uuid": "8484a680-2e7f-4d77-8700-f171ff6f323a", 00:07:36.092 "numa_id": -1, 00:07:36.092 "assigned_rate_limits": { 00:07:36.092 "rw_ios_per_sec": 0, 00:07:36.092 "rw_mbytes_per_sec": 0, 00:07:36.092 "r_mbytes_per_sec": 0, 00:07:36.092 "w_mbytes_per_sec": 0 00:07:36.092 }, 00:07:36.092 "claimed": false, 00:07:36.092 "zoned": false, 00:07:36.092 "supported_io_types": { 00:07:36.092 "read": true, 00:07:36.092 "write": true, 00:07:36.092 "unmap": true, 00:07:36.092 "flush": true, 00:07:36.092 "reset": true, 00:07:36.092 "nvme_admin": true, 00:07:36.092 "nvme_io": true, 00:07:36.092 "nvme_io_md": false, 00:07:36.092 "write_zeroes": true, 00:07:36.092 "zcopy": false, 00:07:36.092 "get_zone_info": false, 00:07:36.092 "zone_management": false, 00:07:36.092 "zone_append": false, 00:07:36.092 "compare": true, 00:07:36.092 "compare_and_write": true, 00:07:36.092 "abort": true, 00:07:36.092 "seek_hole": false, 00:07:36.092 "seek_data": false, 00:07:36.092 "copy": true, 00:07:36.092 "nvme_iov_md": false 00:07:36.092 }, 00:07:36.092 "memory_domains": [ 00:07:36.092 { 00:07:36.092 "dma_device_id": "system", 00:07:36.092 "dma_device_type": 1 00:07:36.092 } 00:07:36.092 ], 00:07:36.092 "driver_specific": { 00:07:36.092 "nvme": [ 00:07:36.092 { 00:07:36.092 "trid": { 00:07:36.092 "trtype": "TCP", 00:07:36.092 "adrfam": "IPv4", 00:07:36.092 "traddr": "10.0.0.3", 00:07:36.092 "trsvcid": "4420", 00:07:36.092 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:36.092 }, 00:07:36.092 "ctrlr_data": { 00:07:36.092 "cntlid": 1, 00:07:36.092 "vendor_id": "0x8086", 00:07:36.092 "model_number": "SPDK bdev Controller", 00:07:36.092 "serial_number": "SPDK0", 00:07:36.092 "firmware_revision": "25.01", 00:07:36.092 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:36.092 "oacs": { 00:07:36.092 "security": 0, 00:07:36.092 "format": 0, 00:07:36.092 "firmware": 0, 
00:07:36.092 "ns_manage": 0 00:07:36.092 }, 00:07:36.092 "multi_ctrlr": true, 00:07:36.092 "ana_reporting": false 00:07:36.092 }, 00:07:36.092 "vs": { 00:07:36.092 "nvme_version": "1.3" 00:07:36.092 }, 00:07:36.092 "ns_data": { 00:07:36.092 "id": 1, 00:07:36.092 "can_share": true 00:07:36.092 } 00:07:36.092 } 00:07:36.092 ], 00:07:36.092 "mp_policy": "active_passive" 00:07:36.092 } 00:07:36.092 } 00:07:36.092 ] 00:07:36.092 03:10:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63415 00:07:36.092 03:10:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:36.092 03:10:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:36.092 Running I/O for 10 seconds... 00:07:37.038 Latency(us) 00:07:37.038 [2024-10-09T03:10:20.341Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:37.038 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.039 Nvme0n1 : 1.00 7571.00 29.57 0.00 0.00 0.00 0.00 0.00 00:07:37.039 [2024-10-09T03:10:20.342Z] =================================================================================================================== 00:07:37.039 [2024-10-09T03:10:20.342Z] Total : 7571.00 29.57 0.00 0.00 0.00 0.00 0.00 00:07:37.039 00:07:37.976 03:10:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e2d91836-5523-436f-a012-9dfc73eaa7d5 00:07:37.976 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.976 Nvme0n1 : 2.00 7532.00 29.42 0.00 0.00 0.00 0.00 0.00 00:07:37.976 [2024-10-09T03:10:21.279Z] =================================================================================================================== 00:07:37.976 [2024-10-09T03:10:21.279Z] Total : 7532.00 29.42 0.00 0.00 0.00 0.00 0.00 00:07:37.976 00:07:38.235 true 00:07:38.235 03:10:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2d91836-5523-436f-a012-9dfc73eaa7d5 00:07:38.235 03:10:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:38.495 03:10:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:38.495 03:10:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:38.495 03:10:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63415 00:07:39.063 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.063 Nvme0n1 : 3.00 7417.33 28.97 0.00 0.00 0.00 0.00 0.00 00:07:39.063 [2024-10-09T03:10:22.366Z] =================================================================================================================== 00:07:39.063 [2024-10-09T03:10:22.366Z] Total : 7417.33 28.97 0.00 0.00 0.00 0.00 0.00 00:07:39.063 00:07:40.000 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.000 Nvme0n1 : 4.00 7404.50 28.92 0.00 0.00 0.00 0.00 0.00 00:07:40.000 [2024-10-09T03:10:23.303Z] 
=================================================================================================================== 00:07:40.000 [2024-10-09T03:10:23.303Z] Total : 7404.50 28.92 0.00 0.00 0.00 0.00 0.00 00:07:40.000 00:07:40.938 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.938 Nvme0n1 : 5.00 7371.40 28.79 0.00 0.00 0.00 0.00 0.00 00:07:40.938 [2024-10-09T03:10:24.241Z] =================================================================================================================== 00:07:40.938 [2024-10-09T03:10:24.241Z] Total : 7371.40 28.79 0.00 0.00 0.00 0.00 0.00 00:07:40.938 00:07:42.316 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.316 Nvme0n1 : 6.00 7349.33 28.71 0.00 0.00 0.00 0.00 0.00 00:07:42.316 [2024-10-09T03:10:25.619Z] =================================================================================================================== 00:07:42.316 [2024-10-09T03:10:25.619Z] Total : 7349.33 28.71 0.00 0.00 0.00 0.00 0.00 00:07:42.316 00:07:43.253 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.253 Nvme0n1 : 7.00 7315.43 28.58 0.00 0.00 0.00 0.00 0.00 00:07:43.253 [2024-10-09T03:10:26.556Z] =================================================================================================================== 00:07:43.253 [2024-10-09T03:10:26.556Z] Total : 7315.43 28.58 0.00 0.00 0.00 0.00 0.00 00:07:43.253 00:07:44.190 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.190 Nvme0n1 : 8.00 7305.88 28.54 0.00 0.00 0.00 0.00 0.00 00:07:44.190 [2024-10-09T03:10:27.493Z] =================================================================================================================== 00:07:44.190 [2024-10-09T03:10:27.493Z] Total : 7305.88 28.54 0.00 0.00 0.00 0.00 0.00 00:07:44.190 00:07:45.149 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.149 Nvme0n1 : 9.00 7284.33 28.45 0.00 0.00 0.00 0.00 0.00 00:07:45.149 [2024-10-09T03:10:28.452Z] =================================================================================================================== 00:07:45.149 [2024-10-09T03:10:28.452Z] Total : 7284.33 28.45 0.00 0.00 0.00 0.00 0.00 00:07:45.149 00:07:46.086 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.086 Nvme0n1 : 10.00 7241.70 28.29 0.00 0.00 0.00 0.00 0.00 00:07:46.086 [2024-10-09T03:10:29.389Z] =================================================================================================================== 00:07:46.086 [2024-10-09T03:10:29.389Z] Total : 7241.70 28.29 0.00 0.00 0.00 0.00 0.00 00:07:46.086 00:07:46.086 00:07:46.086 Latency(us) 00:07:46.086 [2024-10-09T03:10:29.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.086 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.086 Nvme0n1 : 10.01 7248.98 28.32 0.00 0.00 17651.07 14000.87 61484.68 00:07:46.086 [2024-10-09T03:10:29.389Z] =================================================================================================================== 00:07:46.086 [2024-10-09T03:10:29.389Z] Total : 7248.98 28.32 0.00 0.00 17651.07 14000.87 61484.68 00:07:46.086 { 00:07:46.086 "results": [ 00:07:46.086 { 00:07:46.086 "job": "Nvme0n1", 00:07:46.086 "core_mask": "0x2", 00:07:46.086 "workload": "randwrite", 00:07:46.086 "status": "finished", 00:07:46.086 "queue_depth": 128, 00:07:46.086 "io_size": 4096, 00:07:46.086 "runtime": 
10.007613, 00:07:46.086 "iops": 7248.981350497866, 00:07:46.086 "mibps": 28.316333400382288, 00:07:46.086 "io_failed": 0, 00:07:46.086 "io_timeout": 0, 00:07:46.086 "avg_latency_us": 17651.074639415034, 00:07:46.086 "min_latency_us": 14000.872727272726, 00:07:46.086 "max_latency_us": 61484.68363636364 00:07:46.086 } 00:07:46.086 ], 00:07:46.086 "core_count": 1 00:07:46.086 } 00:07:46.086 03:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63386 00:07:46.086 03:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 63386 ']' 00:07:46.086 03:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 63386 00:07:46.086 03:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:07:46.086 03:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:46.086 03:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63386 00:07:46.086 03:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:46.086 killing process with pid 63386 00:07:46.086 03:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:46.086 03:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63386' 00:07:46.086 03:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 63386 00:07:46.086 Received shutdown signal, test time was about 10.000000 seconds 00:07:46.086 00:07:46.086 Latency(us) 00:07:46.086 [2024-10-09T03:10:29.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.086 [2024-10-09T03:10:29.389Z] =================================================================================================================== 00:07:46.086 [2024-10-09T03:10:29.389Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:46.086 03:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 63386 00:07:46.346 03:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:46.604 03:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:46.862 03:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2d91836-5523-436f-a012-9dfc73eaa7d5 00:07:46.862 03:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:47.120 03:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:47.120 03:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:47.120 03:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:47.379 [2024-10-09 03:10:30.595577] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:47.379 03:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2d91836-5523-436f-a012-9dfc73eaa7d5 00:07:47.379 03:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:47.379 03:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2d91836-5523-436f-a012-9dfc73eaa7d5 00:07:47.379 03:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:47.379 03:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.379 03:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:47.379 03:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.379 03:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:47.379 03:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.379 03:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:47.379 03:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:47.379 03:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2d91836-5523-436f-a012-9dfc73eaa7d5 00:07:47.638 request: 00:07:47.638 { 00:07:47.638 "uuid": "e2d91836-5523-436f-a012-9dfc73eaa7d5", 00:07:47.638 "method": "bdev_lvol_get_lvstores", 00:07:47.638 "req_id": 1 00:07:47.638 } 00:07:47.638 Got JSON-RPC error response 00:07:47.638 response: 00:07:47.638 { 00:07:47.638 "code": -19, 00:07:47.638 "message": "No such device" 00:07:47.638 } 00:07:47.638 03:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:47.638 03:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:47.638 03:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:47.638 03:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:47.638 03:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:47.898 aio_bdev 00:07:47.898 03:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
8484a680-2e7f-4d77-8700-f171ff6f323a 00:07:47.898 03:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=8484a680-2e7f-4d77-8700-f171ff6f323a 00:07:47.898 03:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:47.898 03:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:07:47.898 03:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:47.898 03:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:47.898 03:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:48.157 03:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8484a680-2e7f-4d77-8700-f171ff6f323a -t 2000 00:07:48.416 [ 00:07:48.416 { 00:07:48.416 "name": "8484a680-2e7f-4d77-8700-f171ff6f323a", 00:07:48.416 "aliases": [ 00:07:48.416 "lvs/lvol" 00:07:48.416 ], 00:07:48.416 "product_name": "Logical Volume", 00:07:48.416 "block_size": 4096, 00:07:48.416 "num_blocks": 38912, 00:07:48.416 "uuid": "8484a680-2e7f-4d77-8700-f171ff6f323a", 00:07:48.416 "assigned_rate_limits": { 00:07:48.416 "rw_ios_per_sec": 0, 00:07:48.416 "rw_mbytes_per_sec": 0, 00:07:48.416 "r_mbytes_per_sec": 0, 00:07:48.416 "w_mbytes_per_sec": 0 00:07:48.416 }, 00:07:48.416 "claimed": false, 00:07:48.416 "zoned": false, 00:07:48.416 "supported_io_types": { 00:07:48.416 "read": true, 00:07:48.416 "write": true, 00:07:48.416 "unmap": true, 00:07:48.416 "flush": false, 00:07:48.416 "reset": true, 00:07:48.416 "nvme_admin": false, 00:07:48.416 "nvme_io": false, 00:07:48.416 "nvme_io_md": false, 00:07:48.416 "write_zeroes": true, 00:07:48.416 "zcopy": false, 00:07:48.416 "get_zone_info": false, 00:07:48.416 "zone_management": false, 00:07:48.416 "zone_append": false, 00:07:48.416 "compare": false, 00:07:48.416 "compare_and_write": false, 00:07:48.416 "abort": false, 00:07:48.416 "seek_hole": true, 00:07:48.416 "seek_data": true, 00:07:48.416 "copy": false, 00:07:48.416 "nvme_iov_md": false 00:07:48.416 }, 00:07:48.416 "driver_specific": { 00:07:48.416 "lvol": { 00:07:48.416 "lvol_store_uuid": "e2d91836-5523-436f-a012-9dfc73eaa7d5", 00:07:48.416 "base_bdev": "aio_bdev", 00:07:48.416 "thin_provision": false, 00:07:48.416 "num_allocated_clusters": 38, 00:07:48.416 "snapshot": false, 00:07:48.416 "clone": false, 00:07:48.416 "esnap_clone": false 00:07:48.416 } 00:07:48.416 } 00:07:48.416 } 00:07:48.416 ] 00:07:48.416 03:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:07:48.416 03:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2d91836-5523-436f-a012-9dfc73eaa7d5 00:07:48.416 03:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:48.675 03:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:48.675 03:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2d91836-5523-436f-a012-9dfc73eaa7d5 00:07:48.675 03:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:48.934 03:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:48.934 03:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8484a680-2e7f-4d77-8700-f171ff6f323a 00:07:49.501 03:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e2d91836-5523-436f-a012-9dfc73eaa7d5 00:07:49.501 03:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:49.760 03:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:50.328 00:07:50.328 real 0m18.585s 00:07:50.328 user 0m17.301s 00:07:50.328 sys 0m2.744s 00:07:50.328 03:10:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.328 03:10:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:50.328 ************************************ 00:07:50.328 END TEST lvs_grow_clean 00:07:50.328 ************************************ 00:07:50.328 03:10:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:50.328 03:10:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:50.328 03:10:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.328 03:10:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:50.328 ************************************ 00:07:50.328 START TEST lvs_grow_dirty 00:07:50.328 ************************************ 00:07:50.328 03:10:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:07:50.328 03:10:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:50.328 03:10:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:50.328 03:10:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:50.328 03:10:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:50.328 03:10:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:50.328 03:10:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:50.328 03:10:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:50.328 03:10:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:50.328 03:10:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:50.587 03:10:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:50.587 03:10:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:50.845 03:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=14eb72fb-f03b-4b35-b1bb-052861e0324f 00:07:50.845 03:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14eb72fb-f03b-4b35-b1bb-052861e0324f 00:07:50.845 03:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:51.104 03:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:51.104 03:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:51.104 03:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 14eb72fb-f03b-4b35-b1bb-052861e0324f lvol 150 00:07:51.362 03:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=24445aee-0d45-46a2-8271-d460a3924900 00:07:51.362 03:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:51.362 03:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:51.620 [2024-10-09 03:10:34.916918] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:51.620 [2024-10-09 03:10:34.917259] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:51.879 true 00:07:51.879 03:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14eb72fb-f03b-4b35-b1bb-052861e0324f 00:07:51.879 03:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:52.137 03:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:52.137 03:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:52.137 03:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 24445aee-0d45-46a2-8271-d460a3924900 00:07:52.396 03:10:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:52.655 [2024-10-09 03:10:35.929530] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:52.655 03:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:52.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:52.943 03:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63664 00:07:52.943 03:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:52.943 03:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:52.943 03:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63664 /var/tmp/bdevperf.sock 00:07:52.943 03:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 63664 ']' 00:07:52.943 03:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:52.943 03:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:52.943 03:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:52.943 03:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:52.943 03:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:53.209 [2024-10-09 03:10:36.282008] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:07:53.209 [2024-10-09 03:10:36.282295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63664 ] 00:07:53.209 [2024-10-09 03:10:36.417828] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.209 [2024-10-09 03:10:36.509748] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.468 [2024-10-09 03:10:36.567346] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.036 03:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:54.036 03:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:54.036 03:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:54.295 Nvme0n1 00:07:54.295 03:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:54.554 [ 00:07:54.554 { 00:07:54.554 "name": "Nvme0n1", 00:07:54.554 "aliases": [ 00:07:54.554 "24445aee-0d45-46a2-8271-d460a3924900" 00:07:54.554 ], 00:07:54.554 "product_name": "NVMe disk", 00:07:54.554 "block_size": 4096, 00:07:54.554 "num_blocks": 38912, 00:07:54.554 "uuid": "24445aee-0d45-46a2-8271-d460a3924900", 00:07:54.555 "numa_id": -1, 00:07:54.555 "assigned_rate_limits": { 00:07:54.555 "rw_ios_per_sec": 0, 00:07:54.555 "rw_mbytes_per_sec": 0, 00:07:54.555 "r_mbytes_per_sec": 0, 00:07:54.555 "w_mbytes_per_sec": 0 00:07:54.555 }, 00:07:54.555 "claimed": false, 00:07:54.555 "zoned": false, 00:07:54.555 "supported_io_types": { 00:07:54.555 "read": true, 00:07:54.555 "write": true, 00:07:54.555 "unmap": true, 00:07:54.555 "flush": true, 00:07:54.555 "reset": true, 00:07:54.555 "nvme_admin": true, 00:07:54.555 "nvme_io": true, 00:07:54.555 "nvme_io_md": false, 00:07:54.555 "write_zeroes": true, 00:07:54.555 "zcopy": false, 00:07:54.555 "get_zone_info": false, 00:07:54.555 "zone_management": false, 00:07:54.555 "zone_append": false, 00:07:54.555 "compare": true, 00:07:54.555 "compare_and_write": true, 00:07:54.555 "abort": true, 00:07:54.555 "seek_hole": false, 00:07:54.555 "seek_data": false, 00:07:54.555 "copy": true, 00:07:54.555 "nvme_iov_md": false 00:07:54.555 }, 00:07:54.555 "memory_domains": [ 00:07:54.555 { 00:07:54.555 "dma_device_id": "system", 00:07:54.555 "dma_device_type": 1 00:07:54.555 } 00:07:54.555 ], 00:07:54.555 "driver_specific": { 00:07:54.555 "nvme": [ 00:07:54.555 { 00:07:54.555 "trid": { 00:07:54.555 "trtype": "TCP", 00:07:54.555 "adrfam": "IPv4", 00:07:54.555 "traddr": "10.0.0.3", 00:07:54.555 "trsvcid": "4420", 00:07:54.555 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:54.555 }, 00:07:54.555 "ctrlr_data": { 00:07:54.555 "cntlid": 1, 00:07:54.555 "vendor_id": "0x8086", 00:07:54.555 "model_number": "SPDK bdev Controller", 00:07:54.555 "serial_number": "SPDK0", 00:07:54.555 "firmware_revision": "25.01", 00:07:54.555 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:54.555 "oacs": { 00:07:54.555 "security": 0, 00:07:54.555 "format": 0, 00:07:54.555 "firmware": 0, 
00:07:54.555 "ns_manage": 0 00:07:54.555 }, 00:07:54.555 "multi_ctrlr": true, 00:07:54.555 "ana_reporting": false 00:07:54.555 }, 00:07:54.555 "vs": { 00:07:54.555 "nvme_version": "1.3" 00:07:54.555 }, 00:07:54.555 "ns_data": { 00:07:54.555 "id": 1, 00:07:54.555 "can_share": true 00:07:54.555 } 00:07:54.555 } 00:07:54.555 ], 00:07:54.555 "mp_policy": "active_passive" 00:07:54.555 } 00:07:54.555 } 00:07:54.555 ] 00:07:54.814 03:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63688 00:07:54.814 03:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:54.814 03:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:54.814 Running I/O for 10 seconds... 00:07:55.751 Latency(us) 00:07:55.751 [2024-10-09T03:10:39.054Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:55.751 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.751 Nvme0n1 : 1.00 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:07:55.751 [2024-10-09T03:10:39.054Z] =================================================================================================================== 00:07:55.751 [2024-10-09T03:10:39.054Z] Total : 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:07:55.751 00:07:56.688 03:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 14eb72fb-f03b-4b35-b1bb-052861e0324f 00:07:56.946 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.946 Nvme0n1 : 2.00 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:07:56.946 [2024-10-09T03:10:40.249Z] =================================================================================================================== 00:07:56.946 [2024-10-09T03:10:40.249Z] Total : 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:07:56.946 00:07:56.946 true 00:07:56.946 03:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:56.946 03:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14eb72fb-f03b-4b35-b1bb-052861e0324f 00:07:57.205 03:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:57.205 03:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:57.205 03:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63688 00:07:57.772 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.772 Nvme0n1 : 3.00 7535.33 29.43 0.00 0.00 0.00 0.00 0.00 00:07:57.772 [2024-10-09T03:10:41.075Z] =================================================================================================================== 00:07:57.772 [2024-10-09T03:10:41.075Z] Total : 7535.33 29.43 0.00 0.00 0.00 0.00 0.00 00:07:57.772 00:07:58.708 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.708 Nvme0n1 : 4.00 7461.25 29.15 0.00 0.00 0.00 0.00 0.00 00:07:58.708 [2024-10-09T03:10:42.011Z] 
=================================================================================================================== 00:07:58.708 [2024-10-09T03:10:42.011Z] Total : 7461.25 29.15 0.00 0.00 0.00 0.00 0.00 00:07:58.708 00:08:00.085 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.085 Nvme0n1 : 5.00 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:08:00.085 [2024-10-09T03:10:43.388Z] =================================================================================================================== 00:08:00.085 [2024-10-09T03:10:43.388Z] Total : 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:08:00.085 00:08:01.022 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.022 Nvme0n1 : 6.00 7343.83 28.69 0.00 0.00 0.00 0.00 0.00 00:08:01.022 [2024-10-09T03:10:44.325Z] =================================================================================================================== 00:08:01.022 [2024-10-09T03:10:44.325Z] Total : 7343.83 28.69 0.00 0.00 0.00 0.00 0.00 00:08:01.022 00:08:02.023 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.023 Nvme0n1 : 7.00 7383.29 28.84 0.00 0.00 0.00 0.00 0.00 00:08:02.023 [2024-10-09T03:10:45.326Z] =================================================================================================================== 00:08:02.023 [2024-10-09T03:10:45.326Z] Total : 7383.29 28.84 0.00 0.00 0.00 0.00 0.00 00:08:02.023 00:08:02.960 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.960 Nvme0n1 : 8.00 7397.00 28.89 0.00 0.00 0.00 0.00 0.00 00:08:02.960 [2024-10-09T03:10:46.263Z] =================================================================================================================== 00:08:02.960 [2024-10-09T03:10:46.263Z] Total : 7397.00 28.89 0.00 0.00 0.00 0.00 0.00 00:08:02.960 00:08:03.897 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.897 Nvme0n1 : 9.00 7407.67 28.94 0.00 0.00 0.00 0.00 0.00 00:08:03.897 [2024-10-09T03:10:47.200Z] =================================================================================================================== 00:08:03.897 [2024-10-09T03:10:47.200Z] Total : 7407.67 28.94 0.00 0.00 0.00 0.00 0.00 00:08:03.897 00:08:04.833 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.833 Nvme0n1 : 10.00 7441.60 29.07 0.00 0.00 0.00 0.00 0.00 00:08:04.833 [2024-10-09T03:10:48.136Z] =================================================================================================================== 00:08:04.833 [2024-10-09T03:10:48.136Z] Total : 7441.60 29.07 0.00 0.00 0.00 0.00 0.00 00:08:04.833 00:08:04.833 00:08:04.833 Latency(us) 00:08:04.833 [2024-10-09T03:10:48.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.833 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.833 Nvme0n1 : 10.01 7443.64 29.08 0.00 0.00 17190.73 8757.99 50283.99 00:08:04.833 [2024-10-09T03:10:48.136Z] =================================================================================================================== 00:08:04.833 [2024-10-09T03:10:48.136Z] Total : 7443.64 29.08 0.00 0.00 17190.73 8757.99 50283.99 00:08:04.833 { 00:08:04.833 "results": [ 00:08:04.833 { 00:08:04.833 "job": "Nvme0n1", 00:08:04.833 "core_mask": "0x2", 00:08:04.833 "workload": "randwrite", 00:08:04.833 "status": "finished", 00:08:04.833 "queue_depth": 128, 00:08:04.833 "io_size": 4096, 00:08:04.833 "runtime": 
10.014458, 00:08:04.833 "iops": 7443.637988196665, 00:08:04.833 "mibps": 29.076710891393223, 00:08:04.833 "io_failed": 0, 00:08:04.833 "io_timeout": 0, 00:08:04.833 "avg_latency_us": 17190.729934242616, 00:08:04.833 "min_latency_us": 8757.992727272727, 00:08:04.833 "max_latency_us": 50283.98545454545 00:08:04.833 } 00:08:04.833 ], 00:08:04.833 "core_count": 1 00:08:04.833 } 00:08:04.833 03:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63664 00:08:04.833 03:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 63664 ']' 00:08:04.833 03:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 63664 00:08:04.833 03:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:04.833 03:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:04.833 03:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63664 00:08:04.833 killing process with pid 63664 00:08:04.833 Received shutdown signal, test time was about 10.000000 seconds 00:08:04.833 00:08:04.833 Latency(us) 00:08:04.833 [2024-10-09T03:10:48.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.833 [2024-10-09T03:10:48.136Z] =================================================================================================================== 00:08:04.833 [2024-10-09T03:10:48.136Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:04.833 03:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:04.833 03:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:04.833 03:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63664' 00:08:04.833 03:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 63664 00:08:04.833 03:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 63664 00:08:05.092 03:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:05.351 03:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:05.611 03:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14eb72fb-f03b-4b35-b1bb-052861e0324f 00:08:05.611 03:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:05.870 03:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:05.870 03:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:05.870 03:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63298 
00:08:05.870 03:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63298 00:08:05.870 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63298 Killed "${NVMF_APP[@]}" "$@" 00:08:05.870 03:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:05.870 03:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:05.870 03:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:05.870 03:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:05.870 03:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:05.870 03:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=63826 00:08:05.870 03:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:05.870 03:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 63826 00:08:05.870 03:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 63826 ']' 00:08:05.870 03:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.870 03:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:05.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.870 03:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.870 03:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:05.870 03:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:05.870 [2024-10-09 03:10:49.157846] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:08:05.870 [2024-10-09 03:10:49.157964] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.129 [2024-10-09 03:10:49.301075] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.129 [2024-10-09 03:10:49.420979] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.129 [2024-10-09 03:10:49.421058] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:06.129 [2024-10-09 03:10:49.421070] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.129 [2024-10-09 03:10:49.421077] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.129 [2024-10-09 03:10:49.421083] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:06.129 [2024-10-09 03:10:49.421478] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.388 [2024-10-09 03:10:49.492937] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.955 03:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:06.955 03:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:06.955 03:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:06.955 03:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:06.955 03:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:06.955 03:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.955 03:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:07.214 [2024-10-09 03:10:50.390178] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:07.214 [2024-10-09 03:10:50.391180] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:07.214 [2024-10-09 03:10:50.391536] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:07.214 03:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:07.214 03:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 24445aee-0d45-46a2-8271-d460a3924900 00:08:07.214 03:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=24445aee-0d45-46a2-8271-d460a3924900 00:08:07.214 03:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:07.214 03:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:07.214 03:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:07.214 03:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:07.214 03:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:07.473 03:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 24445aee-0d45-46a2-8271-d460a3924900 -t 2000 00:08:07.732 [ 00:08:07.732 { 00:08:07.732 "name": "24445aee-0d45-46a2-8271-d460a3924900", 00:08:07.732 "aliases": [ 00:08:07.732 "lvs/lvol" 00:08:07.732 ], 00:08:07.732 "product_name": "Logical Volume", 00:08:07.732 "block_size": 4096, 00:08:07.732 "num_blocks": 38912, 00:08:07.732 "uuid": "24445aee-0d45-46a2-8271-d460a3924900", 00:08:07.732 "assigned_rate_limits": { 00:08:07.732 "rw_ios_per_sec": 0, 00:08:07.732 "rw_mbytes_per_sec": 0, 00:08:07.732 "r_mbytes_per_sec": 0, 00:08:07.732 "w_mbytes_per_sec": 0 00:08:07.732 }, 00:08:07.732 
"claimed": false, 00:08:07.732 "zoned": false, 00:08:07.732 "supported_io_types": { 00:08:07.732 "read": true, 00:08:07.732 "write": true, 00:08:07.732 "unmap": true, 00:08:07.732 "flush": false, 00:08:07.732 "reset": true, 00:08:07.732 "nvme_admin": false, 00:08:07.732 "nvme_io": false, 00:08:07.732 "nvme_io_md": false, 00:08:07.732 "write_zeroes": true, 00:08:07.732 "zcopy": false, 00:08:07.732 "get_zone_info": false, 00:08:07.732 "zone_management": false, 00:08:07.732 "zone_append": false, 00:08:07.732 "compare": false, 00:08:07.732 "compare_and_write": false, 00:08:07.732 "abort": false, 00:08:07.732 "seek_hole": true, 00:08:07.732 "seek_data": true, 00:08:07.732 "copy": false, 00:08:07.732 "nvme_iov_md": false 00:08:07.732 }, 00:08:07.732 "driver_specific": { 00:08:07.732 "lvol": { 00:08:07.732 "lvol_store_uuid": "14eb72fb-f03b-4b35-b1bb-052861e0324f", 00:08:07.732 "base_bdev": "aio_bdev", 00:08:07.732 "thin_provision": false, 00:08:07.732 "num_allocated_clusters": 38, 00:08:07.732 "snapshot": false, 00:08:07.732 "clone": false, 00:08:07.732 "esnap_clone": false 00:08:07.732 } 00:08:07.732 } 00:08:07.732 } 00:08:07.732 ] 00:08:07.732 03:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:07.732 03:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14eb72fb-f03b-4b35-b1bb-052861e0324f 00:08:07.732 03:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:07.991 03:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:07.991 03:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14eb72fb-f03b-4b35-b1bb-052861e0324f 00:08:07.991 03:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:08.250 03:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:08.250 03:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:08.509 [2024-10-09 03:10:51.675695] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:08.509 03:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14eb72fb-f03b-4b35-b1bb-052861e0324f 00:08:08.509 03:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:08.509 03:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14eb72fb-f03b-4b35-b1bb-052861e0324f 00:08:08.509 03:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:08.509 03:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.509 03:10:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:08.509 03:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.509 03:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:08.509 03:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.509 03:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:08.509 03:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:08.509 03:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14eb72fb-f03b-4b35-b1bb-052861e0324f 00:08:08.768 request: 00:08:08.768 { 00:08:08.768 "uuid": "14eb72fb-f03b-4b35-b1bb-052861e0324f", 00:08:08.768 "method": "bdev_lvol_get_lvstores", 00:08:08.768 "req_id": 1 00:08:08.768 } 00:08:08.768 Got JSON-RPC error response 00:08:08.768 response: 00:08:08.768 { 00:08:08.768 "code": -19, 00:08:08.768 "message": "No such device" 00:08:08.768 } 00:08:08.768 03:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:08.768 03:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:08.768 03:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:08.768 03:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:08.768 03:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:09.026 aio_bdev 00:08:09.026 03:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 24445aee-0d45-46a2-8271-d460a3924900 00:08:09.026 03:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=24445aee-0d45-46a2-8271-d460a3924900 00:08:09.026 03:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:09.027 03:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:09.027 03:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:09.027 03:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:09.027 03:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:09.285 03:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 24445aee-0d45-46a2-8271-d460a3924900 -t 2000 00:08:09.545 [ 00:08:09.545 { 
00:08:09.545 "name": "24445aee-0d45-46a2-8271-d460a3924900", 00:08:09.545 "aliases": [ 00:08:09.545 "lvs/lvol" 00:08:09.545 ], 00:08:09.545 "product_name": "Logical Volume", 00:08:09.545 "block_size": 4096, 00:08:09.545 "num_blocks": 38912, 00:08:09.545 "uuid": "24445aee-0d45-46a2-8271-d460a3924900", 00:08:09.545 "assigned_rate_limits": { 00:08:09.545 "rw_ios_per_sec": 0, 00:08:09.545 "rw_mbytes_per_sec": 0, 00:08:09.545 "r_mbytes_per_sec": 0, 00:08:09.545 "w_mbytes_per_sec": 0 00:08:09.545 }, 00:08:09.545 "claimed": false, 00:08:09.545 "zoned": false, 00:08:09.545 "supported_io_types": { 00:08:09.545 "read": true, 00:08:09.545 "write": true, 00:08:09.545 "unmap": true, 00:08:09.545 "flush": false, 00:08:09.545 "reset": true, 00:08:09.545 "nvme_admin": false, 00:08:09.545 "nvme_io": false, 00:08:09.545 "nvme_io_md": false, 00:08:09.545 "write_zeroes": true, 00:08:09.545 "zcopy": false, 00:08:09.545 "get_zone_info": false, 00:08:09.545 "zone_management": false, 00:08:09.545 "zone_append": false, 00:08:09.545 "compare": false, 00:08:09.545 "compare_and_write": false, 00:08:09.545 "abort": false, 00:08:09.545 "seek_hole": true, 00:08:09.545 "seek_data": true, 00:08:09.545 "copy": false, 00:08:09.545 "nvme_iov_md": false 00:08:09.545 }, 00:08:09.545 "driver_specific": { 00:08:09.545 "lvol": { 00:08:09.545 "lvol_store_uuid": "14eb72fb-f03b-4b35-b1bb-052861e0324f", 00:08:09.545 "base_bdev": "aio_bdev", 00:08:09.545 "thin_provision": false, 00:08:09.545 "num_allocated_clusters": 38, 00:08:09.545 "snapshot": false, 00:08:09.545 "clone": false, 00:08:09.545 "esnap_clone": false 00:08:09.545 } 00:08:09.545 } 00:08:09.545 } 00:08:09.545 ] 00:08:09.545 03:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:09.545 03:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14eb72fb-f03b-4b35-b1bb-052861e0324f 00:08:09.545 03:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:09.804 03:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:09.804 03:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:09.804 03:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14eb72fb-f03b-4b35-b1bb-052861e0324f 00:08:10.064 03:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:10.064 03:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 24445aee-0d45-46a2-8271-d460a3924900 00:08:10.324 03:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 14eb72fb-f03b-4b35-b1bb-052861e0324f 00:08:10.583 03:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:10.843 03:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:11.410 ************************************ 00:08:11.410 END TEST lvs_grow_dirty 00:08:11.410 ************************************ 00:08:11.410 00:08:11.410 real 0m21.047s 00:08:11.410 user 0m43.442s 00:08:11.410 sys 0m8.676s 00:08:11.410 03:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:11.410 03:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:11.410 03:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:11.410 03:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:11.410 03:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:11.410 03:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:11.410 03:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:11.410 03:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:11.410 03:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:11.410 03:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:11.410 03:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:11.410 nvmf_trace.0 00:08:11.410 03:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:11.410 03:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:11.410 03:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:11.410 03:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:11.669 03:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:11.669 03:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:11.669 03:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:11.669 03:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:11.669 rmmod nvme_tcp 00:08:11.669 rmmod nvme_fabrics 00:08:11.669 rmmod nvme_keyring 00:08:11.669 03:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:11.669 03:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:11.669 03:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:11.669 03:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 63826 ']' 00:08:11.669 03:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 63826 00:08:11.669 03:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 63826 ']' 00:08:11.669 03:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 63826 00:08:11.669 03:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:11.669 03:10:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:11.669 03:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63826 00:08:11.928 killing process with pid 63826 00:08:11.928 03:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:11.928 03:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:11.928 03:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63826' 00:08:11.928 03:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 63826 00:08:11.928 03:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 63826 00:08:12.188 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:12.188 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:12.188 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:12.188 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:12.188 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:08:12.188 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:12.188 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:08:12.188 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:12.188 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:12.188 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:12.188 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:12.188 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:12.188 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:12.188 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:12.188 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:12.188 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:12.188 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:12.188 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:12.188 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:12.188 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:12.188 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:12.188 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:12.447 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:08:12.447 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.447 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.447 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.447 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:08:12.447 ************************************ 00:08:12.447 END TEST nvmf_lvs_grow 00:08:12.447 ************************************ 00:08:12.447 00:08:12.447 real 0m42.839s 00:08:12.447 user 1m7.648s 00:08:12.447 sys 0m12.386s 00:08:12.447 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:12.447 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:12.447 03:10:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:12.447 03:10:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:12.447 03:10:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:12.448 03:10:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:12.448 ************************************ 00:08:12.448 START TEST nvmf_bdev_io_wait 00:08:12.448 ************************************ 00:08:12.448 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:12.448 * Looking for test storage... 
00:08:12.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:12.448 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:12.448 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:08:12.448 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:12.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.708 --rc genhtml_branch_coverage=1 00:08:12.708 --rc genhtml_function_coverage=1 00:08:12.708 --rc genhtml_legend=1 00:08:12.708 --rc geninfo_all_blocks=1 00:08:12.708 --rc geninfo_unexecuted_blocks=1 00:08:12.708 00:08:12.708 ' 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:12.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.708 --rc genhtml_branch_coverage=1 00:08:12.708 --rc genhtml_function_coverage=1 00:08:12.708 --rc genhtml_legend=1 00:08:12.708 --rc geninfo_all_blocks=1 00:08:12.708 --rc geninfo_unexecuted_blocks=1 00:08:12.708 00:08:12.708 ' 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:12.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.708 --rc genhtml_branch_coverage=1 00:08:12.708 --rc genhtml_function_coverage=1 00:08:12.708 --rc genhtml_legend=1 00:08:12.708 --rc geninfo_all_blocks=1 00:08:12.708 --rc geninfo_unexecuted_blocks=1 00:08:12.708 00:08:12.708 ' 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:12.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.708 --rc genhtml_branch_coverage=1 00:08:12.708 --rc genhtml_function_coverage=1 00:08:12.708 --rc genhtml_legend=1 00:08:12.708 --rc geninfo_all_blocks=1 00:08:12.708 --rc geninfo_unexecuted_blocks=1 00:08:12.708 00:08:12.708 ' 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.708 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:12.709 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
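The nvmftestinit trace below builds the virtual test network step by step. Condensed into a hedged sketch (not the common.sh source itself, only the net effect visible in the trace: four veth pairs, the target-side ends moved into the nvmf_tgt_ns_spdk namespace, everything joined by the nvmf_br bridge, and TCP port 4420 opened on the initiator interfaces):

    ip netns add nvmf_tgt_ns_spdk
    # four veth pairs; the *_br ends later become bridge members
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    # target-side ends live inside the namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # addressing: initiators 10.0.0.1/.2, target 10.0.0.3/.4
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # one bridge ties the host-side ends together
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br  master nvmf_br
    ip link set nvmf_init_br2 master nvmf_br
    ip link set nvmf_tgt_br   master nvmf_br
    ip link set nvmf_tgt_br2  master nvmf_br
    # allow NVMe/TCP (port 4420) in and bridge forwarding
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The "Cannot find device" / "Cannot open network namespace" messages at the start of the trace are expected: each teardown command is followed by "true", as the script first removes any leftover topology from a previous run before creating a fresh one.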
00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # nvmf_veth_init 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:12.709 
03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:12.709 Cannot find device "nvmf_init_br" 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:12.709 Cannot find device "nvmf_init_br2" 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:12.709 Cannot find device "nvmf_tgt_br" 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:12.709 Cannot find device "nvmf_tgt_br2" 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:12.709 Cannot find device "nvmf_init_br" 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:12.709 Cannot find device "nvmf_init_br2" 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:12.709 Cannot find device "nvmf_tgt_br" 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:12.709 Cannot find device "nvmf_tgt_br2" 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:12.709 Cannot find device "nvmf_br" 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:12.709 Cannot find device "nvmf_init_if" 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:12.709 Cannot find device "nvmf_init_if2" 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:12.709 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:08:12.709 
03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:12.709 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:12.709 03:10:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:12.709 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:12.968 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:12.968 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:12.968 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:12.968 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:12.968 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:12.968 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:12.968 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:12.968 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:12.968 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:12.968 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:12.968 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:12.968 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:12.968 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:12.968 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:12.968 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:12.968 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:12.968 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:12.968 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:12.968 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:12.968 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:12.968 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:12.968 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:12.968 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:12.968 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:12.968 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:12.968 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:12.968 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:12.968 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:12.968 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:08:12.968 00:08:12.968 --- 10.0.0.3 ping statistics --- 00:08:12.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.968 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:08:12.968 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:12.968 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:12.968 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:08:12.968 00:08:12.968 --- 10.0.0.4 ping statistics --- 00:08:12.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.968 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:08:12.969 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:12.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:12.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:08:12.969 00:08:12.969 --- 10.0.0.1 ping statistics --- 00:08:12.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.969 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:08:12.969 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:12.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:12.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:08:12.969 00:08:12.969 --- 10.0.0.2 ping statistics --- 00:08:12.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.969 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:08:12.969 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:12.969 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # return 0 00:08:12.969 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:12.969 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:12.969 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:12.969 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:12.969 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:12.969 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:12.969 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:12.969 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:12.969 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:12.969 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:12.969 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:12.969 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=64200 00:08:12.969 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:12.969 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 64200 00:08:12.969 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 64200 ']' 00:08:12.969 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.969 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:12.969 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.969 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:12.969 03:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:13.228 [2024-10-09 03:10:56.281331] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:08:13.228 [2024-10-09 03:10:56.281617] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.228 [2024-10-09 03:10:56.423595] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:13.487 [2024-10-09 03:10:56.557566] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:13.487 [2024-10-09 03:10:56.557875] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:13.487 [2024-10-09 03:10:56.558033] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:13.487 [2024-10-09 03:10:56.558341] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:13.487 [2024-10-09 03:10:56.558461] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:13.487 [2024-10-09 03:10:56.560024] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.487 [2024-10-09 03:10:56.560233] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.487 [2024-10-09 03:10:56.560311] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:08:13.487 [2024-10-09 03:10:56.560311] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.056 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:14.056 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:14.056 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:14.056 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:14.056 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.315 [2024-10-09 03:10:57.453797] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.315 [2024-10-09 03:10:57.468019] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.315 Malloc0 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.315 [2024-10-09 03:10:57.540715] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64235 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:14.315 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64237 00:08:14.315 03:10:57 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:14.315 { 00:08:14.315 "params": { 00:08:14.315 "name": "Nvme$subsystem", 00:08:14.315 "trtype": "$TEST_TRANSPORT", 00:08:14.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:14.315 "adrfam": "ipv4", 00:08:14.315 "trsvcid": "$NVMF_PORT", 00:08:14.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:14.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:14.315 "hdgst": ${hdgst:-false}, 00:08:14.315 "ddgst": ${ddgst:-false} 00:08:14.315 }, 00:08:14.315 "method": "bdev_nvme_attach_controller" 00:08:14.315 } 00:08:14.315 EOF 00:08:14.315 )") 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64240 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:14.316 { 00:08:14.316 "params": { 00:08:14.316 "name": "Nvme$subsystem", 00:08:14.316 "trtype": "$TEST_TRANSPORT", 00:08:14.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:14.316 "adrfam": "ipv4", 00:08:14.316 "trsvcid": "$NVMF_PORT", 00:08:14.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:14.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:14.316 "hdgst": ${hdgst:-false}, 00:08:14.316 "ddgst": ${ddgst:-false} 00:08:14.316 }, 00:08:14.316 "method": "bdev_nvme_attach_controller" 00:08:14.316 } 00:08:14.316 EOF 00:08:14.316 )") 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64244 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:14.316 { 00:08:14.316 "params": { 00:08:14.316 "name": "Nvme$subsystem", 00:08:14.316 "trtype": "$TEST_TRANSPORT", 00:08:14.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:14.316 "adrfam": "ipv4", 00:08:14.316 "trsvcid": "$NVMF_PORT", 00:08:14.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:14.316 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:08:14.316 "hdgst": ${hdgst:-false}, 00:08:14.316 "ddgst": ${ddgst:-false} 00:08:14.316 }, 00:08:14.316 "method": "bdev_nvme_attach_controller" 00:08:14.316 } 00:08:14.316 EOF 00:08:14.316 )") 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:14.316 { 00:08:14.316 "params": { 00:08:14.316 "name": "Nvme$subsystem", 00:08:14.316 "trtype": "$TEST_TRANSPORT", 00:08:14.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:14.316 "adrfam": "ipv4", 00:08:14.316 "trsvcid": "$NVMF_PORT", 00:08:14.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:14.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:14.316 "hdgst": ${hdgst:-false}, 00:08:14.316 "ddgst": ${ddgst:-false} 00:08:14.316 }, 00:08:14.316 "method": "bdev_nvme_attach_controller" 00:08:14.316 } 00:08:14.316 EOF 00:08:14.316 )") 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:14.316 "params": { 00:08:14.316 "name": "Nvme1", 00:08:14.316 "trtype": "tcp", 00:08:14.316 "traddr": "10.0.0.3", 00:08:14.316 "adrfam": "ipv4", 00:08:14.316 "trsvcid": "4420", 00:08:14.316 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:14.316 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:14.316 "hdgst": false, 00:08:14.316 "ddgst": false 00:08:14.316 }, 00:08:14.316 "method": "bdev_nvme_attach_controller" 00:08:14.316 }' 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:14.316 "params": { 00:08:14.316 "name": "Nvme1", 00:08:14.316 "trtype": "tcp", 00:08:14.316 "traddr": "10.0.0.3", 00:08:14.316 "adrfam": "ipv4", 00:08:14.316 "trsvcid": "4420", 00:08:14.316 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:14.316 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:14.316 "hdgst": false, 00:08:14.316 "ddgst": false 00:08:14.316 }, 00:08:14.316 "method": "bdev_nvme_attach_controller" 00:08:14.316 }' 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:14.316 "params": { 00:08:14.316 "name": "Nvme1", 00:08:14.316 "trtype": "tcp", 00:08:14.316 "traddr": "10.0.0.3", 00:08:14.316 "adrfam": "ipv4", 00:08:14.316 "trsvcid": "4420", 00:08:14.316 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:14.316 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:14.316 "hdgst": false, 00:08:14.316 "ddgst": false 00:08:14.316 }, 00:08:14.316 "method": "bdev_nvme_attach_controller" 00:08:14.316 }' 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:14.316 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:14.316 "params": { 00:08:14.316 "name": "Nvme1", 00:08:14.316 "trtype": "tcp", 00:08:14.316 "traddr": "10.0.0.3", 00:08:14.316 "adrfam": "ipv4", 00:08:14.316 "trsvcid": "4420", 00:08:14.316 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:14.316 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:14.316 "hdgst": false, 00:08:14.316 "ddgst": false 00:08:14.316 }, 00:08:14.316 "method": "bdev_nvme_attach_controller" 00:08:14.316 }' 00:08:14.575 [2024-10-09 03:10:57.616640] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:08:14.575 [2024-10-09 03:10:57.617003] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:14.575 [2024-10-09 03:10:57.626919] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:08:14.575 [2024-10-09 03:10:57.626995] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:14.575 03:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64235 00:08:14.575 [2024-10-09 03:10:57.633473] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:08:14.575 [2024-10-09 03:10:57.633800] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:14.575 [2024-10-09 03:10:57.645860] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:08:14.575 [2024-10-09 03:10:57.646105] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:14.575 [2024-10-09 03:10:57.861251] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.834 [2024-10-09 03:10:57.938282] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.834 [2024-10-09 03:10:57.971325] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:08:14.834 [2024-10-09 03:10:58.011935] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.834 [2024-10-09 03:10:58.021711] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:14.834 [2024-10-09 03:10:58.068982] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:08:14.834 [2024-10-09 03:10:58.090238] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.834 [2024-10-09 03:10:58.120900] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:08:14.834 [2024-10-09 03:10:58.131631] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:15.093 Running I/O for 1 seconds... 00:08:15.093 [2024-10-09 03:10:58.171146] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:15.093 [2024-10-09 03:10:58.185506] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:08:15.093 [2024-10-09 03:10:58.285128] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:15.093 Running I/O for 1 seconds... 00:08:15.093 Running I/O for 1 seconds... 00:08:15.352 Running I/O for 1 seconds... 
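Note: at this point four independent bdevperf processes are running against the same cnode1 subsystem, each pinned to its own core mask and issuing a different I/O type for one second. A condensed sketch of the launch-and-wait pattern behind the trace (binary path and per-instance flags as traced; the write instance's -i value and the variable names are illustrative):

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
# one single-core instance per workload, each with its own shared-memory id (-i),
# which shows up as --file-prefix=spdkN in the DPDK EAL parameters above
$BDEVPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json_sketch) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
$BDEVPERF -m 0x20 -i 2 --json <(gen_nvmf_target_json_sketch) -q 128 -o 4096 -w read -t 1 -s 256 & READ_PID=$!
$BDEVPERF -m 0x40 -i 3 --json <(gen_nvmf_target_json_sketch) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
$BDEVPERF -m 0x80 -i 4 --json <(gen_nvmf_target_json_sketch) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
# the script waits for every instance before tearing the target down
wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID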
00:08:15.919 6251.00 IOPS, 24.42 MiB/s 00:08:15.919 Latency(us) 00:08:15.919 [2024-10-09T03:10:59.222Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:15.919 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:15.919 Nvme1n1 : 1.02 6224.46 24.31 0.00 0.00 20237.91 8400.52 42181.35 00:08:15.919 [2024-10-09T03:10:59.222Z] =================================================================================================================== 00:08:15.919 [2024-10-09T03:10:59.222Z] Total : 6224.46 24.31 0.00 0.00 20237.91 8400.52 42181.35 00:08:16.178 5858.00 IOPS, 22.88 MiB/s 00:08:16.178 Latency(us) 00:08:16.178 [2024-10-09T03:10:59.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.178 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:16.178 Nvme1n1 : 1.01 5954.48 23.26 0.00 0.00 21410.40 6970.65 42181.35 00:08:16.178 [2024-10-09T03:10:59.481Z] =================================================================================================================== 00:08:16.178 [2024-10-09T03:10:59.481Z] Total : 5954.48 23.26 0.00 0.00 21410.40 6970.65 42181.35 00:08:16.178 177016.00 IOPS, 691.47 MiB/s 00:08:16.178 Latency(us) 00:08:16.178 [2024-10-09T03:10:59.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.178 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:16.179 Nvme1n1 : 1.00 176624.30 689.94 0.00 0.00 720.83 357.47 2189.50 00:08:16.179 [2024-10-09T03:10:59.482Z] =================================================================================================================== 00:08:16.179 [2024-10-09T03:10:59.482Z] Total : 176624.30 689.94 0.00 0.00 720.83 357.47 2189.50 00:08:16.179 6804.00 IOPS, 26.58 MiB/s 00:08:16.179 Latency(us) 00:08:16.179 [2024-10-09T03:10:59.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.179 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:16.179 Nvme1n1 : 1.01 6850.44 26.76 0.00 0.00 18566.75 9175.04 27167.65 00:08:16.179 [2024-10-09T03:10:59.482Z] =================================================================================================================== 00:08:16.179 [2024-10-09T03:10:59.482Z] Total : 6850.44 26.76 0.00 0.00 18566.75 9175.04 27167.65 00:08:16.437 03:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64237 00:08:16.437 03:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64240 00:08:16.437 03:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64244 00:08:16.437 03:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:16.437 03:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.437 03:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:16.437 03:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.437 03:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:16.437 03:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:16.437 03:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # 
nvmfcleanup 00:08:16.437 03:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:16.696 03:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:16.696 03:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:16.696 03:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:16.696 03:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:16.696 rmmod nvme_tcp 00:08:16.696 rmmod nvme_fabrics 00:08:16.696 rmmod nvme_keyring 00:08:16.696 03:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:16.696 03:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:16.696 03:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:16.696 03:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 64200 ']' 00:08:16.696 03:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 64200 00:08:16.696 03:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 64200 ']' 00:08:16.696 03:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 64200 00:08:16.696 03:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:16.696 03:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:16.696 03:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64200 00:08:16.696 killing process with pid 64200 00:08:16.696 03:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:16.696 03:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:16.696 03:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64200' 00:08:16.696 03:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 64200 00:08:16.696 03:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 64200 00:08:16.954 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:16.954 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:16.954 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:16.954 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:16.954 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:08:16.954 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:16.954 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:08:16.954 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:16.954 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:16.954 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:16.954 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:16.954 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:16.954 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:16.954 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:16.954 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:16.954 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:16.954 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:16.954 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:16.954 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:17.218 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:17.218 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:17.218 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:17.218 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:17.218 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.218 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.218 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.218 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:08:17.218 00:08:17.218 real 0m4.767s 00:08:17.218 user 0m19.453s 00:08:17.218 sys 0m2.462s 00:08:17.218 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.218 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.218 ************************************ 00:08:17.218 END TEST nvmf_bdev_io_wait 00:08:17.218 ************************************ 00:08:17.218 03:11:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:17.218 03:11:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:17.218 03:11:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:17.218 03:11:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:17.218 ************************************ 00:08:17.218 START TEST nvmf_queue_depth 00:08:17.218 ************************************ 00:08:17.218 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:17.218 * Looking for test storage... 
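Note: the teardown above ends by calling killprocess on the target pid (64200), and the same helper appears again at the end of the queue-depth run that starts here. A condensed sketch of the liveness-probe-then-kill pattern visible in the trace (this is not the exact autotest_common.sh helper; the sudo handling is simplified):

killprocess_sketch() {
    local pid=$1 process_name=
    [ -z "$pid" ] && return 1                            # @950: refuse an empty pid
    kill -0 "$pid" 2>/dev/null || return 0               # @954: nothing to do if the process is already gone
    if [ "$(uname)" = Linux ]; then                      # @955
        process_name=$(ps --no-headers -o comm= "$pid")  # @956: e.g. reactor_0 / reactor_1
    fi
    [ "$process_name" = sudo ] && return 1               # @960: the real helper treats a sudo wrapper specially
    echo "killing process with pid $pid"                 # @968
    kill "$pid"                                          # @969
    wait "$pid"                                          # @974: reap the child before the next test starts
}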
00:08:17.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:17.218 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:17.218 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:08:17.218 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:17.479 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:17.479 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:17.479 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:17.479 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:17.479 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.479 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:17.479 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:17.479 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:17.479 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:17.479 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:17.479 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:17.479 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:17.479 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:17.479 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:17.479 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:17.479 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:17.479 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:17.479 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:17.479 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.479 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:17.479 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:17.479 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:17.479 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:17.479 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.479 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:17.479 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:17.479 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:17.479 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:17.479 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:17.479 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:17.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.480 --rc genhtml_branch_coverage=1 00:08:17.480 --rc genhtml_function_coverage=1 00:08:17.480 --rc genhtml_legend=1 00:08:17.480 --rc geninfo_all_blocks=1 00:08:17.480 --rc geninfo_unexecuted_blocks=1 00:08:17.480 00:08:17.480 ' 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:17.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.480 --rc genhtml_branch_coverage=1 00:08:17.480 --rc genhtml_function_coverage=1 00:08:17.480 --rc genhtml_legend=1 00:08:17.480 --rc geninfo_all_blocks=1 00:08:17.480 --rc geninfo_unexecuted_blocks=1 00:08:17.480 00:08:17.480 ' 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:17.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.480 --rc genhtml_branch_coverage=1 00:08:17.480 --rc genhtml_function_coverage=1 00:08:17.480 --rc genhtml_legend=1 00:08:17.480 --rc geninfo_all_blocks=1 00:08:17.480 --rc geninfo_unexecuted_blocks=1 00:08:17.480 00:08:17.480 ' 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:17.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.480 --rc genhtml_branch_coverage=1 00:08:17.480 --rc genhtml_function_coverage=1 00:08:17.480 --rc genhtml_legend=1 00:08:17.480 --rc geninfo_all_blocks=1 00:08:17.480 --rc geninfo_unexecuted_blocks=1 00:08:17.480 00:08:17.480 ' 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:17.480 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:17.480 
03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@458 -- # nvmf_veth_init 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:17.480 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:17.481 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:17.481 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:17.481 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:17.481 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:17.481 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:17.481 03:11:00 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:17.481 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:17.481 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:17.481 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:17.481 Cannot find device "nvmf_init_br" 00:08:17.481 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:17.481 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:17.481 Cannot find device "nvmf_init_br2" 00:08:17.481 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:17.481 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:17.481 Cannot find device "nvmf_tgt_br" 00:08:17.481 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:08:17.481 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:17.481 Cannot find device "nvmf_tgt_br2" 00:08:17.481 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:08:17.481 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:17.481 Cannot find device "nvmf_init_br" 00:08:17.481 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:08:17.481 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:17.481 Cannot find device "nvmf_init_br2" 00:08:17.481 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:08:17.481 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:17.481 Cannot find device "nvmf_tgt_br" 00:08:17.481 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:08:17.481 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:17.481 Cannot find device "nvmf_tgt_br2" 00:08:17.481 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:08:17.481 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:17.481 Cannot find device "nvmf_br" 00:08:17.481 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:08:17.481 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:17.481 Cannot find device "nvmf_init_if" 00:08:17.481 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:08:17.481 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:17.481 Cannot find device "nvmf_init_if2" 00:08:17.481 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:08:17.481 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:17.481 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:17.481 03:11:00 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:08:17.481 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:17.481 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:17.481 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:08:17.481 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:17.481 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:17.739 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:17.739 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:17.739 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:17.739 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:17.739 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:17.739 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:17.739 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:17.739 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:17.739 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:17.739 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:17.739 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:17.739 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:17.739 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:17.739 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:17.739 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:17.739 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:17.739 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:17.739 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:17.739 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:17.739 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:17.740 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:17.740 
03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:17.740 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:17.740 03:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:17.740 03:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:17.740 03:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:17.740 03:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:17.740 03:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:17.740 03:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:17.740 03:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:17.740 03:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:17.740 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:17.740 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.113 ms 00:08:17.740 00:08:17.740 --- 10.0.0.3 ping statistics --- 00:08:17.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.740 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:08:17.740 03:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:17.740 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:17.740 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.088 ms 00:08:17.740 00:08:17.740 --- 10.0.0.4 ping statistics --- 00:08:17.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.740 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:08:17.740 03:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:17.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:17.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:08:17.740 00:08:17.740 --- 10.0.0.1 ping statistics --- 00:08:17.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.740 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:08:17.740 03:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:17.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:17.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:08:17.998 00:08:17.998 --- 10.0.0.2 ping statistics --- 00:08:17.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.998 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:08:17.998 03:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:17.998 03:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # return 0 00:08:17.998 03:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:17.998 03:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:17.998 03:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:17.998 03:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:17.998 03:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:17.998 03:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:17.998 03:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:17.998 03:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:17.998 03:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:17.998 03:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:17.998 03:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.998 03:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=64531 00:08:17.998 03:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:17.998 03:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 64531 00:08:17.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.998 03:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 64531 ']' 00:08:17.998 03:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.998 03:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:17.998 03:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.998 03:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:17.998 03:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.998 [2024-10-09 03:11:01.135143] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:08:17.998 [2024-10-09 03:11:01.135424] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.998 [2024-10-09 03:11:01.282339] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.256 [2024-10-09 03:11:01.434997] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:18.256 [2024-10-09 03:11:01.435424] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:18.256 [2024-10-09 03:11:01.435562] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:18.256 [2024-10-09 03:11:01.435589] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:18.256 [2024-10-09 03:11:01.435600] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:18.256 [2024-10-09 03:11:01.436195] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.256 [2024-10-09 03:11:01.512086] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:18.824 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:18.824 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:18.824 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:18.824 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:18.824 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:18.824 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:18.824 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:18.824 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.824 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:18.824 [2024-10-09 03:11:02.110601] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:18.824 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.824 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:18.824 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.824 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:19.083 Malloc0 00:08:19.083 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.083 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:19.083 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.083 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:08:19.083 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.083 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:19.083 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.083 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:19.083 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.083 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:19.083 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.083 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:19.083 [2024-10-09 03:11:02.178267] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:19.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:19.083 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.083 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64563 00:08:19.083 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:19.083 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64563 /var/tmp/bdevperf.sock 00:08:19.083 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:19.083 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 64563 ']' 00:08:19.083 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:19.083 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:19.083 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:19.083 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:19.083 03:11:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:19.083 [2024-10-09 03:11:02.240313] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:08:19.083 [2024-10-09 03:11:02.240408] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64563 ] 00:08:19.083 [2024-10-09 03:11:02.371817] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.342 [2024-10-09 03:11:02.496570] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.342 [2024-10-09 03:11:02.566583] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.276 03:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:20.276 03:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:20.276 03:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:20.276 03:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.276 03:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:20.276 NVMe0n1 00:08:20.276 03:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.276 03:11:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:20.276 Running I/O for 10 seconds... 00:08:22.587 7682.00 IOPS, 30.01 MiB/s [2024-10-09T03:11:06.823Z] 8166.00 IOPS, 31.90 MiB/s [2024-10-09T03:11:07.758Z] 8216.33 IOPS, 32.10 MiB/s [2024-10-09T03:11:08.693Z] 8474.00 IOPS, 33.10 MiB/s [2024-10-09T03:11:09.630Z] 8610.00 IOPS, 33.63 MiB/s [2024-10-09T03:11:10.572Z] 8722.83 IOPS, 34.07 MiB/s [2024-10-09T03:11:11.524Z] 8805.71 IOPS, 34.40 MiB/s [2024-10-09T03:11:12.458Z] 8852.75 IOPS, 34.58 MiB/s [2024-10-09T03:11:13.837Z] 8889.56 IOPS, 34.72 MiB/s [2024-10-09T03:11:13.837Z] 8916.20 IOPS, 34.83 MiB/s 00:08:30.534 Latency(us) 00:08:30.534 [2024-10-09T03:11:13.837Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.534 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:30.534 Verification LBA range: start 0x0 length 0x4000 00:08:30.534 NVMe0n1 : 10.09 8931.23 34.89 0.00 0.00 114144.39 23116.33 87699.08 00:08:30.534 [2024-10-09T03:11:13.837Z] =================================================================================================================== 00:08:30.534 [2024-10-09T03:11:13.837Z] Total : 8931.23 34.89 0.00 0.00 114144.39 23116.33 87699.08 00:08:30.534 { 00:08:30.534 "results": [ 00:08:30.534 { 00:08:30.534 "job": "NVMe0n1", 00:08:30.534 "core_mask": "0x1", 00:08:30.534 "workload": "verify", 00:08:30.534 "status": "finished", 00:08:30.534 "verify_range": { 00:08:30.534 "start": 0, 00:08:30.534 "length": 16384 00:08:30.534 }, 00:08:30.534 "queue_depth": 1024, 00:08:30.534 "io_size": 4096, 00:08:30.534 "runtime": 10.088647, 00:08:30.534 "iops": 8931.227348920029, 00:08:30.534 "mibps": 34.88760683171886, 00:08:30.534 "io_failed": 0, 00:08:30.534 "io_timeout": 0, 00:08:30.534 "avg_latency_us": 114144.38942506841, 00:08:30.534 "min_latency_us": 23116.334545454545, 00:08:30.534 "max_latency_us": 87699.08363636363 00:08:30.534 
} 00:08:30.534 ], 00:08:30.534 "core_count": 1 00:08:30.534 } 00:08:30.534 03:11:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64563 00:08:30.534 03:11:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 64563 ']' 00:08:30.534 03:11:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 64563 00:08:30.534 03:11:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:30.534 03:11:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:30.534 03:11:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64563 00:08:30.534 killing process with pid 64563 00:08:30.534 Received shutdown signal, test time was about 10.000000 seconds 00:08:30.534 00:08:30.534 Latency(us) 00:08:30.534 [2024-10-09T03:11:13.837Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.534 [2024-10-09T03:11:13.837Z] =================================================================================================================== 00:08:30.534 [2024-10-09T03:11:13.837Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:30.534 03:11:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:30.534 03:11:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:30.534 03:11:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64563' 00:08:30.534 03:11:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 64563 00:08:30.534 03:11:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 64563 00:08:30.793 03:11:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:30.793 03:11:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:30.793 03:11:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:30.793 03:11:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:30.793 03:11:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:30.793 03:11:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:30.793 03:11:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:30.793 03:11:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:30.793 rmmod nvme_tcp 00:08:30.793 rmmod nvme_fabrics 00:08:30.793 rmmod nvme_keyring 00:08:30.793 03:11:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:30.793 03:11:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:30.793 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:30.793 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 64531 ']' 00:08:30.793 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 64531 00:08:30.793 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 64531 ']' 00:08:30.793 
03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 64531 00:08:30.793 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:30.793 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:30.793 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64531 00:08:30.793 killing process with pid 64531 00:08:30.793 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:30.793 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:30.793 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64531' 00:08:30.793 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 64531 00:08:30.793 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 64531 00:08:31.052 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:31.052 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:31.052 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:31.052 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:31.052 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:08:31.052 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:31.052 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:08:31.052 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:31.052 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:31.052 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:31.052 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:31.052 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:31.052 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:31.052 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:31.311 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:31.311 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:31.311 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:31.311 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:31.311 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:31.311 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:31.311 03:11:14 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:31.311 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:31.311 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:31.311 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.311 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:31.311 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.311 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:08:31.311 00:08:31.311 real 0m14.085s 00:08:31.311 user 0m23.607s 00:08:31.311 sys 0m2.627s 00:08:31.311 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:31.311 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.311 ************************************ 00:08:31.311 END TEST nvmf_queue_depth 00:08:31.311 ************************************ 00:08:31.311 03:11:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:31.311 03:11:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:31.311 03:11:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.311 03:11:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:31.311 ************************************ 00:08:31.311 START TEST nvmf_target_multipath 00:08:31.311 ************************************ 00:08:31.311 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:31.570 * Looking for test storage... 
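At a high level, this multipath test builds one ANA-reporting subsystem with two TCP listeners and connects the host to both portals, so the kernel initiator sees two paths (nvme0c0n1 and nvme0c1n1) to the same namespace. The condensed sequence below is assembled from the rpc.py and nvme connect invocations that appear later in this trace; it is a summary sketch, not the multipath.sh script itself, and the nvme connect flags are copied verbatim without further interpretation.

# Condensed from the trace that follows (target side, then host side).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420
# Host side: one connect per portal, using the hostnqn/hostid from nvmf/common.sh
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G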
00:08:31.570 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:31.570 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:31.570 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:31.570 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:31.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.571 --rc genhtml_branch_coverage=1 00:08:31.571 --rc genhtml_function_coverage=1 00:08:31.571 --rc genhtml_legend=1 00:08:31.571 --rc geninfo_all_blocks=1 00:08:31.571 --rc geninfo_unexecuted_blocks=1 00:08:31.571 00:08:31.571 ' 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:31.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.571 --rc genhtml_branch_coverage=1 00:08:31.571 --rc genhtml_function_coverage=1 00:08:31.571 --rc genhtml_legend=1 00:08:31.571 --rc geninfo_all_blocks=1 00:08:31.571 --rc geninfo_unexecuted_blocks=1 00:08:31.571 00:08:31.571 ' 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:31.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.571 --rc genhtml_branch_coverage=1 00:08:31.571 --rc genhtml_function_coverage=1 00:08:31.571 --rc genhtml_legend=1 00:08:31.571 --rc geninfo_all_blocks=1 00:08:31.571 --rc geninfo_unexecuted_blocks=1 00:08:31.571 00:08:31.571 ' 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:31.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.571 --rc genhtml_branch_coverage=1 00:08:31.571 --rc genhtml_function_coverage=1 00:08:31.571 --rc genhtml_legend=1 00:08:31.571 --rc geninfo_all_blocks=1 00:08:31.571 --rc geninfo_unexecuted_blocks=1 00:08:31.571 00:08:31.571 ' 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.571 
03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.571 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:31.572 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@458 -- # nvmf_veth_init 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:31.572 03:11:14 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:31.572 Cannot find device "nvmf_init_br" 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:31.572 Cannot find device "nvmf_init_br2" 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:31.572 Cannot find device "nvmf_tgt_br" 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:31.572 Cannot find device "nvmf_tgt_br2" 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:31.572 Cannot find device "nvmf_init_br" 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:31.572 Cannot find device "nvmf_init_br2" 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:31.572 Cannot find device "nvmf_tgt_br" 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:31.572 Cannot find device "nvmf_tgt_br2" 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:08:31.572 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:31.831 Cannot find device "nvmf_br" 00:08:31.831 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:08:31.831 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:31.831 Cannot find device "nvmf_init_if" 00:08:31.831 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:08:31.831 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:31.831 Cannot find device "nvmf_init_if2" 00:08:31.831 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:08:31.831 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:31.831 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:31.831 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:08:31.831 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:31.831 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:31.831 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:08:31.831 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:31.831 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:31.831 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:31.831 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:31.831 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:31.831 03:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:31.831 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:31.831 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:31.831 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:31.831 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:31.831 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:31.831 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:31.831 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:31.831 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:31.831 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:31.831 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:31.831 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:31.831 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
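The veth plumbing traced here gives the initiator two addresses (10.0.0.1 and 10.0.0.2) in the root namespace and the target two portals (10.0.0.3 and 10.0.0.4) inside nvmf_tgt_ns_spdk, all joined through the nvmf_br bridge. A condensed sketch of that topology, with commands copied from this trace and the second, symmetric leg omitted for brevity:

# One of the two symmetric legs; nvmf_init_if2/nvmf_tgt_if2 (10.0.0.2/10.0.0.4)
# are built the same way, and the per-interface "ip link set ... up" steps
# from the trace are omitted here.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT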
00:08:31.831 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:31.831 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:31.831 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:31.831 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:31.831 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:31.831 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:31.831 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:32.090 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:32.090 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:32.090 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:32.090 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:32.090 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:32.090 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:32.090 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:32.090 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:32.090 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:32.090 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:08:32.090 00:08:32.090 --- 10.0.0.3 ping statistics --- 00:08:32.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.090 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:08:32.090 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:32.090 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:32.090 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:08:32.090 00:08:32.090 --- 10.0.0.4 ping statistics --- 00:08:32.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.090 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:08:32.091 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:32.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:32.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:08:32.091 00:08:32.091 --- 10.0.0.1 ping statistics --- 00:08:32.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.091 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:08:32.091 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:32.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:32.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:08:32.091 00:08:32.091 --- 10.0.0.2 ping statistics --- 00:08:32.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.091 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:08:32.091 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:32.091 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # return 0 00:08:32.091 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:32.091 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:32.091 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:32.091 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:32.091 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:32.091 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:32.091 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:32.091 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:08:32.091 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:32.091 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:32.091 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:32.091 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:32.091 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:32.091 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # nvmfpid=64941 00:08:32.091 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # waitforlisten 64941 00:08:32.091 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:32.091 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 64941 ']' 00:08:32.091 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.091 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:32.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
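The target application is then launched inside the namespace and the test blocks until its RPC socket answers on /var/tmp/spdk.sock. A minimal equivalent of that start-and-wait step is sketched below; the nvmf_tgt command line is copied from the trace, while the readiness loop is a simplified assumption standing in for the real waitforlisten helper in autotest_common.sh.

# Simplified start-and-wait sketch (readiness loop is an assumption).
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done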
00:08:32.091 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.091 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:32.091 03:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:32.091 [2024-10-09 03:11:15.267606] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:08:32.091 [2024-10-09 03:11:15.267709] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.349 [2024-10-09 03:11:15.409083] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:32.349 [2024-10-09 03:11:15.553437] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.349 [2024-10-09 03:11:15.553500] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.349 [2024-10-09 03:11:15.553513] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.349 [2024-10-09 03:11:15.553524] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.349 [2024-10-09 03:11:15.553534] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:32.349 [2024-10-09 03:11:15.555102] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.349 [2024-10-09 03:11:15.555245] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.349 [2024-10-09 03:11:15.555381] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.349 [2024-10-09 03:11:15.555391] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.349 [2024-10-09 03:11:15.632764] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:33.284 03:11:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:33.284 03:11:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:08:33.284 03:11:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:33.284 03:11:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:33.284 03:11:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:33.284 03:11:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.284 03:11:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:33.543 [2024-10-09 03:11:16.631937] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.543 03:11:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:33.802 Malloc0 00:08:33.802 03:11:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:34.061 03:11:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:34.320 03:11:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:34.579 [2024-10-09 03:11:17.647711] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:34.579 03:11:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:08:34.838 [2024-10-09 03:11:17.935963] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:08:34.838 03:11:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid=cb2c30f2-294c-46db-807f-ce0b3b357918 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:34.838 03:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid=cb2c30f2-294c-46db-807f-ce0b3b357918 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:08:35.097 03:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:35.097 03:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:08:35.097 03:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:35.097 03:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:35.097 03:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:08:37.027 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:37.028 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:37.028 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:37.028 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:37.028 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:37.028 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:08:37.028 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:37.028 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:37.028 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:08:37.028 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:37.028 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:37.028 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:37.028 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:08:37.028 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:37.028 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:37.028 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:37.028 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:37.028 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:37.028 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:37.028 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:37.028 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:37.028 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:37.028 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:37.028 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:37.028 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:37.028 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:37.028 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:37.028 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:37.028 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:37.028 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:37.028 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:37.028 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:37.028 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=65030 00:08:37.028 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:37.028 03:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:37.028 [global] 00:08:37.028 thread=1 00:08:37.028 invalidate=1 00:08:37.028 rw=randrw 00:08:37.028 time_based=1 00:08:37.028 runtime=6 00:08:37.028 ioengine=libaio 00:08:37.028 direct=1 00:08:37.028 bs=4096 00:08:37.028 iodepth=128 00:08:37.028 norandommap=0 00:08:37.028 numjobs=1 00:08:37.028 00:08:37.028 verify_dump=1 00:08:37.028 verify_backlog=512 00:08:37.028 verify_state_save=0 00:08:37.028 do_verify=1 00:08:37.028 verify=crc32c-intel 00:08:37.028 [job0] 00:08:37.028 filename=/dev/nvme0n1 00:08:37.028 Could not set queue depth (nvme0n1) 00:08:37.286 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:37.286 fio-3.35 00:08:37.286 Starting 1 thread 00:08:38.224 03:11:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:38.483 03:11:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:38.741 03:11:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:38.741 03:11:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:38.741 03:11:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:38.741 03:11:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:38.741 03:11:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:38.741 03:11:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:38.741 03:11:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:38.741 03:11:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:38.741 03:11:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:38.741 03:11:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:38.741 03:11:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:38.741 03:11:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:38.741 03:11:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:38.998 03:11:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:39.255 03:11:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:39.255 03:11:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:39.255 03:11:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:39.256 03:11:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:39.256 03:11:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:39.256 03:11:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:39.256 03:11:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:39.256 03:11:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:39.256 03:11:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:39.256 03:11:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:39.256 03:11:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:39.256 03:11:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:39.256 03:11:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 65030 00:08:43.444 00:08:43.444 job0: (groupid=0, jobs=1): err= 0: pid=65057: Wed Oct 9 03:11:26 2024 00:08:43.444 read: IOPS=9928, BW=38.8MiB/s (40.7MB/s)(233MiB/6006msec) 00:08:43.444 slat (usec): min=2, max=8423, avg=59.29, stdev=245.86 00:08:43.444 clat (usec): min=1671, max=17345, avg=8736.10, stdev=1609.40 00:08:43.444 lat (usec): min=1702, max=17355, avg=8795.39, stdev=1615.28 00:08:43.444 clat percentiles (usec): 00:08:43.444 | 1.00th=[ 4490], 5.00th=[ 6456], 10.00th=[ 7308], 20.00th=[ 7832], 00:08:43.444 | 30.00th=[ 8094], 40.00th=[ 8291], 50.00th=[ 8586], 60.00th=[ 8848], 00:08:43.444 | 70.00th=[ 9110], 80.00th=[ 9503], 90.00th=[10421], 95.00th=[12256], 00:08:43.444 | 99.00th=[13829], 99.50th=[14353], 99.90th=[15401], 99.95th=[15926], 00:08:43.444 | 99.99th=[17433] 00:08:43.444 bw ( KiB/s): min=11744, max=25744, per=51.69%, avg=20529.45, stdev=4302.35, samples=11 00:08:43.444 iops : min= 2936, max= 6436, avg=5132.36, stdev=1075.59, samples=11 00:08:43.444 write: IOPS=5754, BW=22.5MiB/s (23.6MB/s)(123MiB/5469msec); 0 zone resets 00:08:43.444 slat (usec): min=4, max=5332, avg=68.15, stdev=171.11 00:08:43.444 clat (usec): min=2461, max=17420, avg=7655.90, stdev=1402.79 00:08:43.444 lat (usec): min=2485, max=17454, avg=7724.05, stdev=1408.86 00:08:43.444 clat percentiles (usec): 00:08:43.444 | 1.00th=[ 3490], 5.00th=[ 4555], 10.00th=[ 5997], 20.00th=[ 7046], 00:08:43.444 | 30.00th=[ 7373], 40.00th=[ 7570], 50.00th=[ 7832], 60.00th=[ 8029], 00:08:43.444 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[ 8848], 95.00th=[ 9372], 00:08:43.444 | 99.00th=[11731], 99.50th=[12649], 99.90th=[14484], 99.95th=[16450], 00:08:43.444 | 99.99th=[17171] 00:08:43.444 bw ( KiB/s): min=12200, max=25728, per=89.52%, avg=20605.82, stdev=4087.42, samples=11 00:08:43.444 iops : min= 3050, max= 6432, avg=5151.45, stdev=1021.85, samples=11 00:08:43.444 lat (msec) : 2=0.02%, 4=1.22%, 10=89.49%, 20=9.27% 00:08:43.444 cpu : usr=5.68%, sys=20.50%, ctx=5182, majf=0, minf=90 00:08:43.444 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:43.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.444 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:43.444 issued rwts: total=59630,31472,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:43.444 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:43.444 00:08:43.444 Run status group 0 (all jobs): 00:08:43.444 READ: bw=38.8MiB/s (40.7MB/s), 38.8MiB/s-38.8MiB/s (40.7MB/s-40.7MB/s), io=233MiB (244MB), run=6006-6006msec 00:08:43.444 WRITE: bw=22.5MiB/s (23.6MB/s), 22.5MiB/s-22.5MiB/s (23.6MB/s-23.6MB/s), io=123MiB (129MB), run=5469-5469msec 00:08:43.444 00:08:43.444 Disk stats (read/write): 00:08:43.444 nvme0n1: ios=58929/30610, merge=0/0, ticks=493572/219844, in_queue=713416, util=98.68% 00:08:43.444 03:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:08:43.703 03:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:08:43.962 03:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:08:43.962 03:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:43.962 03:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:43.962 03:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:43.962 03:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:43.962 03:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:43.962 03:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:08:43.962 03:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:43.962 03:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:43.962 03:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:43.962 03:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:43.962 03:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:43.962 03:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:08:43.962 03:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=65136 00:08:43.962 03:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:43.962 03:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:08:43.962 [global] 00:08:43.962 thread=1 00:08:43.962 invalidate=1 00:08:43.962 rw=randrw 00:08:43.962 time_based=1 00:08:43.962 runtime=6 00:08:43.962 ioengine=libaio 00:08:43.962 direct=1 00:08:43.962 bs=4096 00:08:43.962 iodepth=128 00:08:43.962 norandommap=0 00:08:43.962 numjobs=1 00:08:43.962 00:08:43.962 verify_dump=1 00:08:43.962 verify_backlog=512 00:08:43.962 verify_state_save=0 00:08:43.962 do_verify=1 00:08:43.962 verify=crc32c-intel 00:08:43.962 [job0] 00:08:43.962 filename=/dev/nvme0n1 00:08:44.221 Could not set queue depth (nvme0n1) 00:08:44.221 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:44.221 fio-3.35 00:08:44.221 Starting 1 thread 00:08:45.167 03:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:45.426 03:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:45.684 
03:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:08:45.684 03:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:45.684 03:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:45.684 03:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:45.684 03:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:45.684 03:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:45.684 03:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:08:45.684 03:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:45.684 03:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:45.684 03:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:45.684 03:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:45.684 03:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:45.685 03:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:45.943 03:11:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:46.202 03:11:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:08:46.202 03:11:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:46.202 03:11:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:46.202 03:11:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:46.202 03:11:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:08:46.202 03:11:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:46.202 03:11:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:08:46.202 03:11:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:46.202 03:11:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:46.202 03:11:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:46.202 03:11:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:46.202 03:11:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:46.202 03:11:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 65136 00:08:50.393 00:08:50.393 job0: (groupid=0, jobs=1): err= 0: pid=65158: Wed Oct 9 03:11:33 2024 00:08:50.393 read: IOPS=11.0k, BW=42.9MiB/s (45.0MB/s)(258MiB/6006msec) 00:08:50.393 slat (usec): min=5, max=8306, avg=44.14, stdev=196.30 00:08:50.393 clat (usec): min=228, max=20064, avg=7911.33, stdev=2677.05 00:08:50.393 lat (usec): min=245, max=20073, avg=7955.47, stdev=2687.81 00:08:50.393 clat percentiles (usec): 00:08:50.393 | 1.00th=[ 971], 5.00th=[ 1844], 10.00th=[ 4113], 20.00th=[ 6456], 00:08:50.393 | 30.00th=[ 7504], 40.00th=[ 7963], 50.00th=[ 8225], 60.00th=[ 8586], 00:08:50.393 | 70.00th=[ 8979], 80.00th=[ 9372], 90.00th=[10945], 95.00th=[12125], 00:08:50.393 | 99.00th=[13829], 99.50th=[14877], 99.90th=[18220], 99.95th=[18744], 00:08:50.393 | 99.99th=[19792] 00:08:50.393 bw ( KiB/s): min= 2672, max=35280, per=52.81%, avg=23189.82, stdev=9520.61, samples=11 00:08:50.393 iops : min= 668, max= 8820, avg=5797.45, stdev=2380.15, samples=11 00:08:50.393 write: IOPS=6761, BW=26.4MiB/s (27.7MB/s)(138MiB/5244msec); 0 zone resets 00:08:50.393 slat (usec): min=12, max=2222, avg=56.72, stdev=139.98 00:08:50.393 clat (usec): min=252, max=19343, avg=6797.38, stdev=2191.38 00:08:50.393 lat (usec): min=282, max=19402, avg=6854.09, stdev=2203.53 00:08:50.393 clat percentiles (usec): 00:08:50.393 | 1.00th=[ 930], 5.00th=[ 2606], 10.00th=[ 3621], 20.00th=[ 4817], 00:08:50.393 | 30.00th=[ 6456], 40.00th=[ 7046], 50.00th=[ 7373], 60.00th=[ 7635], 00:08:50.393 | 70.00th=[ 7898], 80.00th=[ 8160], 90.00th=[ 8717], 95.00th=[ 9634], 00:08:50.393 | 99.00th=[12125], 99.50th=[12780], 99.90th=[15795], 99.95th=[16909], 00:08:50.393 | 99.99th=[19268] 00:08:50.393 bw ( KiB/s): min= 2728, max=36136, per=85.98%, avg=23253.09, stdev=9493.26, samples=11 00:08:50.393 iops : min= 682, max= 9034, avg=5813.27, stdev=2373.31, samples=11 00:08:50.393 lat (usec) : 250=0.01%, 500=0.14%, 750=0.30%, 1000=0.82% 00:08:50.393 lat (msec) : 2=3.22%, 4=6.40%, 10=78.38%, 20=10.73%, 50=0.01% 00:08:50.393 cpu : usr=5.36%, sys=23.33%, ctx=6504, majf=0, minf=114 00:08:50.394 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:50.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:50.394 issued rwts: total=65929,35455,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.394 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:08:50.394 00:08:50.394 Run status group 0 (all jobs): 00:08:50.394 READ: bw=42.9MiB/s (45.0MB/s), 42.9MiB/s-42.9MiB/s (45.0MB/s-45.0MB/s), io=258MiB (270MB), run=6006-6006msec 00:08:50.394 WRITE: bw=26.4MiB/s (27.7MB/s), 26.4MiB/s-26.4MiB/s (27.7MB/s-27.7MB/s), io=138MiB (145MB), run=5244-5244msec 00:08:50.394 00:08:50.394 Disk stats (read/write): 00:08:50.394 nvme0n1: ios=65038/34833, merge=0/0, ticks=493356/222091, in_queue=715447, util=98.53% 00:08:50.394 03:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:50.394 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:50.394 03:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:50.394 03:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:08:50.394 03:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:50.394 03:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:50.394 03:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:50.394 03:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:50.394 03:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:08:50.394 03:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:50.653 03:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:08:50.653 03:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:08:50.653 03:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:08:50.653 03:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:08:50.653 03:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:50.653 03:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:50.913 03:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:50.913 03:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:50.913 03:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:50.913 03:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:50.913 rmmod nvme_tcp 00:08:50.913 rmmod nvme_fabrics 00:08:50.913 rmmod nvme_keyring 00:08:50.913 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:50.913 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:50.913 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:50.913 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # 
'[' -n 64941 ']' 00:08:50.913 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # killprocess 64941 00:08:50.913 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 64941 ']' 00:08:50.913 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 64941 00:08:50.913 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:08:50.913 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:50.913 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64941 00:08:50.913 killing process with pid 64941 00:08:50.913 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:50.913 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:50.913 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64941' 00:08:50.913 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 64941 00:08:50.913 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 64941 00:08:51.173 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:51.173 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:51.173 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:51.173 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:51.173 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:08:51.173 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:51.173 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:08:51.173 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:51.173 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:51.173 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:51.173 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:51.173 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:51.431 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:51.431 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:51.431 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:51.431 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:51.431 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:51.432 
03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:51.432 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:51.432 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:51.432 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:51.432 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:51.432 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:51.432 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.432 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.432 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.432 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:08:51.432 00:08:51.432 real 0m20.115s 00:08:51.432 user 1m14.419s 00:08:51.432 sys 0m9.641s 00:08:51.432 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.432 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:51.432 ************************************ 00:08:51.432 END TEST nvmf_target_multipath 00:08:51.432 ************************************ 00:08:51.432 03:11:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:51.432 03:11:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:51.432 03:11:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.432 03:11:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:51.432 ************************************ 00:08:51.432 START TEST nvmf_zcopy 00:08:51.432 ************************************ 00:08:51.432 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:51.691 * Looking for test storage... 
00:08:51.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:51.691 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:51.691 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:08:51.691 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:51.691 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:51.691 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:51.691 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:51.691 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:51.691 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:51.691 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:51.691 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:51.691 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:51.691 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:51.691 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:51.691 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:51.691 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:51.691 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:51.691 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:51.691 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:51.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.692 --rc genhtml_branch_coverage=1 00:08:51.692 --rc genhtml_function_coverage=1 00:08:51.692 --rc genhtml_legend=1 00:08:51.692 --rc geninfo_all_blocks=1 00:08:51.692 --rc geninfo_unexecuted_blocks=1 00:08:51.692 00:08:51.692 ' 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:51.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.692 --rc genhtml_branch_coverage=1 00:08:51.692 --rc genhtml_function_coverage=1 00:08:51.692 --rc genhtml_legend=1 00:08:51.692 --rc geninfo_all_blocks=1 00:08:51.692 --rc geninfo_unexecuted_blocks=1 00:08:51.692 00:08:51.692 ' 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:51.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.692 --rc genhtml_branch_coverage=1 00:08:51.692 --rc genhtml_function_coverage=1 00:08:51.692 --rc genhtml_legend=1 00:08:51.692 --rc geninfo_all_blocks=1 00:08:51.692 --rc geninfo_unexecuted_blocks=1 00:08:51.692 00:08:51.692 ' 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:51.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.692 --rc genhtml_branch_coverage=1 00:08:51.692 --rc genhtml_function_coverage=1 00:08:51.692 --rc genhtml_legend=1 00:08:51.692 --rc geninfo_all_blocks=1 00:08:51.692 --rc geninfo_unexecuted_blocks=1 00:08:51.692 00:08:51.692 ' 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
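The scripts/common.sh trace above is the lcov version probe: lt 1.15 2 splits both version strings on dots and compares them field by field, and the result picks which --rc option spelling matches the installed lcov. A minimal sketch of that dotted-version comparison, assuming purely numeric components (simplified helper name, not the repository's exact code):

version_lt() {                        # returns 0 when $1 sorts before $2
    local IFS='.-:'
    local -a v1=($1) v2=($2)
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a > b )) && return 1       # left-hand version is newer
        (( a < b )) && return 0       # left-hand version is older
    done
    return 1                          # equal versions are not "less than"
}
# e.g. version_lt 1.15 2 && echo "lcov predates 2.x, keep the lcov_* --rc names"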
00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:51.692 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
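The "[: : integer expression expected" complaint from nvmf/common.sh line 33 a few entries above is bash's [ builtin receiving an empty string where -eq needs an integer (the trace shows the failing test as '[' '' -eq 1 ']'); the failed test simply evaluates false and the script carries on. A minimal reproduction with a defensive variant (the variable name here is a placeholder, not the script's own):

flag=""                                   # empty in this environment
[ "$flag" -eq 1 ] && echo enabled         # prints "[: : integer expression expected", test is false
[ "${flag:-0}" -eq 1 ] && echo enabled    # defaulting to 0 keeps the operand numeric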
00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@458 -- # nvmf_veth_init 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:51.692 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:51.693 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:51.693 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:51.693 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:51.693 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:51.693 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:51.693 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:51.693 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:51.693 Cannot find device "nvmf_init_br" 00:08:51.693 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:08:51.693 03:11:34 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:51.693 Cannot find device "nvmf_init_br2" 00:08:51.693 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:08:51.693 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:51.693 Cannot find device "nvmf_tgt_br" 00:08:51.693 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:08:51.693 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:51.693 Cannot find device "nvmf_tgt_br2" 00:08:51.693 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:08:51.693 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:51.952 Cannot find device "nvmf_init_br" 00:08:51.952 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:08:51.952 03:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:51.952 Cannot find device "nvmf_init_br2" 00:08:51.952 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:08:51.952 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:51.952 Cannot find device "nvmf_tgt_br" 00:08:51.952 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:08:51.952 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:51.952 Cannot find device "nvmf_tgt_br2" 00:08:51.952 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:08:51.952 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:51.952 Cannot find device "nvmf_br" 00:08:51.952 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:08:51.952 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:51.952 Cannot find device "nvmf_init_if" 00:08:51.952 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:08:51.952 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:51.952 Cannot find device "nvmf_init_if2" 00:08:51.952 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:08:51.952 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:51.952 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:51.952 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:08:51.952 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:51.952 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:51.952 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:08:51.952 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:51.952 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:51.952 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:08:51.952 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:51.952 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:51.952 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:51.952 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:51.952 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:51.952 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:51.952 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:51.952 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:51.952 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:51.952 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:51.952 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:51.952 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:52.211 03:11:35 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:52.211 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:52.211 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:08:52.211 00:08:52.211 --- 10.0.0.3 ping statistics --- 00:08:52.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.211 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:52.211 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:52.211 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:08:52.211 00:08:52.211 --- 10.0.0.4 ping statistics --- 00:08:52.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.211 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:52.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:52.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:08:52.211 00:08:52.211 --- 10.0.0.1 ping statistics --- 00:08:52.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.211 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:52.211 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:52.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:08:52.211 00:08:52.211 --- 10.0.0.2 ping statistics --- 00:08:52.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.211 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # return 0 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=65462 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 65462 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 65462 ']' 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:52.211 03:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.211 [2024-10-09 03:11:35.458407] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:08:52.211 [2024-10-09 03:11:35.458694] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.470 [2024-10-09 03:11:35.593051] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.470 [2024-10-09 03:11:35.711823] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:52.470 [2024-10-09 03:11:35.711878] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:52.470 [2024-10-09 03:11:35.711907] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:52.470 [2024-10-09 03:11:35.711915] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:52.470 [2024-10-09 03:11:35.711922] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:52.470 [2024-10-09 03:11:35.712349] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.470 [2024-10-09 03:11:35.768404] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.407 [2024-10-09 03:11:36.570057] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:08:53.407 [2024-10-09 03:11:36.586207] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.407 malloc0 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:53.407 { 00:08:53.407 "params": { 00:08:53.407 "name": "Nvme$subsystem", 00:08:53.407 "trtype": "$TEST_TRANSPORT", 00:08:53.407 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:53.407 "adrfam": "ipv4", 00:08:53.407 "trsvcid": "$NVMF_PORT", 00:08:53.407 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:53.407 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:53.407 "hdgst": ${hdgst:-false}, 00:08:53.407 "ddgst": ${ddgst:-false} 00:08:53.407 }, 00:08:53.407 "method": "bdev_nvme_attach_controller" 00:08:53.407 } 00:08:53.407 EOF 00:08:53.407 )") 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
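--json /dev/fd/62 in the bdevperf invocation above is the read end of a bash process substitution: the attach-controller JSON that gen_nvmf_target_json assembles (printed in full just below) is streamed into bdevperf without a temporary file. The same shape written out as an illustrative sketch, with the flag meanings annotated (the binary path is the one used throughout this log):

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
"$bdevperf" --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192
# --json <(...)  bdev/controller config read from /dev/fd/NN instead of a file on disk
# -t 10          run time in seconds
# -q 128         I/O queue depth
# -w verify      verify workload (written data is read back and checked)
# -o 8192        I/O size in bytes (8 KiB)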
00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:08:53.407 03:11:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:53.407 "params": { 00:08:53.407 "name": "Nvme1", 00:08:53.407 "trtype": "tcp", 00:08:53.407 "traddr": "10.0.0.3", 00:08:53.407 "adrfam": "ipv4", 00:08:53.407 "trsvcid": "4420", 00:08:53.407 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:53.407 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:53.407 "hdgst": false, 00:08:53.407 "ddgst": false 00:08:53.407 }, 00:08:53.407 "method": "bdev_nvme_attach_controller" 00:08:53.407 }' 00:08:53.407 [2024-10-09 03:11:36.697212] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:08:53.407 [2024-10-09 03:11:36.697305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65496 ] 00:08:53.666 [2024-10-09 03:11:36.837943] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.666 [2024-10-09 03:11:36.944949] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.925 [2024-10-09 03:11:37.015264] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:53.925 Running I/O for 10 seconds... 00:08:56.270 6162.00 IOPS, 48.14 MiB/s [2024-10-09T03:11:40.509Z] 6241.00 IOPS, 48.76 MiB/s [2024-10-09T03:11:41.447Z] 6266.33 IOPS, 48.96 MiB/s [2024-10-09T03:11:42.384Z] 6282.75 IOPS, 49.08 MiB/s [2024-10-09T03:11:43.321Z] 6308.20 IOPS, 49.28 MiB/s [2024-10-09T03:11:44.258Z] 6327.83 IOPS, 49.44 MiB/s [2024-10-09T03:11:45.195Z] 6353.57 IOPS, 49.64 MiB/s [2024-10-09T03:11:46.571Z] 6358.00 IOPS, 49.67 MiB/s [2024-10-09T03:11:47.182Z] 6360.56 IOPS, 49.69 MiB/s [2024-10-09T03:11:47.182Z] 6350.20 IOPS, 49.61 MiB/s 00:09:03.879 Latency(us) 00:09:03.879 [2024-10-09T03:11:47.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:03.879 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:03.879 Verification LBA range: start 0x0 length 0x1000 00:09:03.879 Nvme1n1 : 10.02 6351.67 49.62 0.00 0.00 20086.96 1630.95 33125.47 00:09:03.879 [2024-10-09T03:11:47.182Z] =================================================================================================================== 00:09:03.879 [2024-10-09T03:11:47.182Z] Total : 6351.67 49.62 0.00 0.00 20086.96 1630.95 33125.47 00:09:04.138 03:11:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65618 00:09:04.139 03:11:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:04.139 03:11:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:04.139 03:11:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:04.139 03:11:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:04.139 03:11:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:09:04.139 03:11:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:09:04.139 03:11:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:04.139 03:11:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:04.139 { 00:09:04.139 "params": { 00:09:04.139 "name": "Nvme$subsystem", 00:09:04.139 "trtype": "$TEST_TRANSPORT", 00:09:04.139 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:04.139 "adrfam": "ipv4", 00:09:04.139 "trsvcid": "$NVMF_PORT", 00:09:04.139 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:04.139 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:04.139 "hdgst": ${hdgst:-false}, 00:09:04.139 "ddgst": ${ddgst:-false} 00:09:04.139 }, 00:09:04.139 "method": "bdev_nvme_attach_controller" 00:09:04.139 } 00:09:04.139 EOF 00:09:04.139 )") 00:09:04.139 03:11:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:09:04.139 [2024-10-09 03:11:47.389206] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.139 [2024-10-09 03:11:47.389253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.139 03:11:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:09:04.139 03:11:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:09:04.139 03:11:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:04.139 "params": { 00:09:04.139 "name": "Nvme1", 00:09:04.139 "trtype": "tcp", 00:09:04.139 "traddr": "10.0.0.3", 00:09:04.139 "adrfam": "ipv4", 00:09:04.139 "trsvcid": "4420", 00:09:04.139 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:04.139 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:04.139 "hdgst": false, 00:09:04.139 "ddgst": false 00:09:04.139 }, 00:09:04.139 "method": "bdev_nvme_attach_controller" 00:09:04.139 }' 00:09:04.139 [2024-10-09 03:11:47.397172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.139 [2024-10-09 03:11:47.397204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.139 [2024-10-09 03:11:47.409178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.139 [2024-10-09 03:11:47.409206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.139 [2024-10-09 03:11:47.421164] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.139 [2024-10-09 03:11:47.421192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.139 [2024-10-09 03:11:47.433169] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.139 [2024-10-09 03:11:47.433197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.398 [2024-10-09 03:11:47.440932] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:09:04.398 [2024-10-09 03:11:47.441028] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65618 ] 00:09:04.398 [2024-10-09 03:11:47.445173] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.398 [2024-10-09 03:11:47.445200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.398 [2024-10-09 03:11:47.457175] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.398 [2024-10-09 03:11:47.457203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.398 [2024-10-09 03:11:47.469177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.398 [2024-10-09 03:11:47.469205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.398 [2024-10-09 03:11:47.481190] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.398 [2024-10-09 03:11:47.481217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.398 [2024-10-09 03:11:47.493193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.398 [2024-10-09 03:11:47.493220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.398 [2024-10-09 03:11:47.505207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.398 [2024-10-09 03:11:47.505237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.398 [2024-10-09 03:11:47.517208] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.398 [2024-10-09 03:11:47.517237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.398 [2024-10-09 03:11:47.529210] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.398 [2024-10-09 03:11:47.529237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.398 [2024-10-09 03:11:47.541212] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.398 [2024-10-09 03:11:47.541239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.398 [2024-10-09 03:11:47.553215] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.398 [2024-10-09 03:11:47.553242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.398 [2024-10-09 03:11:47.569223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.398 [2024-10-09 03:11:47.569252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.398 [2024-10-09 03:11:47.581227] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.398 [2024-10-09 03:11:47.581256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.398 [2024-10-09 03:11:47.581350] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.398 [2024-10-09 03:11:47.589227] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.398 [2024-10-09 03:11:47.589263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:09:04.398 [2024-10-09 03:11:47.601255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.398 [2024-10-09 03:11:47.601288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.398 [2024-10-09 03:11:47.609232] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.398 [2024-10-09 03:11:47.609262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.398 [2024-10-09 03:11:47.621241] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.398 [2024-10-09 03:11:47.621272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.398 [2024-10-09 03:11:47.629236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.398 [2024-10-09 03:11:47.629265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.398 [2024-10-09 03:11:47.637238] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.398 [2024-10-09 03:11:47.637266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.398 [2024-10-09 03:11:47.645242] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.398 [2024-10-09 03:11:47.645271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.398 [2024-10-09 03:11:47.653242] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.398 [2024-10-09 03:11:47.653271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.398 [2024-10-09 03:11:47.661244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.398 [2024-10-09 03:11:47.661272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.398 [2024-10-09 03:11:47.669256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.398 [2024-10-09 03:11:47.669284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.398 [2024-10-09 03:11:47.677247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.398 [2024-10-09 03:11:47.677275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.398 [2024-10-09 03:11:47.683455] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.398 [2024-10-09 03:11:47.685248] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.398 [2024-10-09 03:11:47.685273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.398 [2024-10-09 03:11:47.693251] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.398 [2024-10-09 03:11:47.693279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.658 [2024-10-09 03:11:47.701255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.658 [2024-10-09 03:11:47.701284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.658 [2024-10-09 03:11:47.709259] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.658 [2024-10-09 03:11:47.709288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.658 
[2024-10-09 03:11:47.717260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.658 [2024-10-09 03:11:47.717288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.658 [2024-10-09 03:11:47.725260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.658 [2024-10-09 03:11:47.725289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.658 [2024-10-09 03:11:47.733279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.658 [2024-10-09 03:11:47.733307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.658 [2024-10-09 03:11:47.741264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.658 [2024-10-09 03:11:47.741292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.658 [2024-10-09 03:11:47.747746] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:04.658 [2024-10-09 03:11:47.753272] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.658 [2024-10-09 03:11:47.753300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.658 [2024-10-09 03:11:47.761273] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.658 [2024-10-09 03:11:47.761302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.658 [2024-10-09 03:11:47.769277] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.658 [2024-10-09 03:11:47.769307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.658 [2024-10-09 03:11:47.777285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.658 [2024-10-09 03:11:47.777315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.658 [2024-10-09 03:11:47.789291] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.658 [2024-10-09 03:11:47.789321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.658 [2024-10-09 03:11:47.797345] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.658 [2024-10-09 03:11:47.797413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.658 [2024-10-09 03:11:47.805315] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.658 [2024-10-09 03:11:47.805366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.658 [2024-10-09 03:11:47.813332] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.658 [2024-10-09 03:11:47.813380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.658 [2024-10-09 03:11:47.821317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.658 [2024-10-09 03:11:47.821539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.658 [2024-10-09 03:11:47.829328] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.658 [2024-10-09 03:11:47.829363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.658 [2024-10-09 
03:11:47.837327] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.658 [2024-10-09 03:11:47.837374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.658 [2024-10-09 03:11:47.845333] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.658 [2024-10-09 03:11:47.845380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.658 [2024-10-09 03:11:47.853357] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.658 [2024-10-09 03:11:47.853403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.658 [2024-10-09 03:11:47.861369] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.658 [2024-10-09 03:11:47.861418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.658 Running I/O for 5 seconds... 00:09:04.658 [2024-10-09 03:11:47.869352] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.658 [2024-10-09 03:11:47.869383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.658 [2024-10-09 03:11:47.880956] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.658 [2024-10-09 03:11:47.880992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.658 [2024-10-09 03:11:47.890503] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.658 [2024-10-09 03:11:47.890539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.658 [2024-10-09 03:11:47.902774] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.658 [2024-10-09 03:11:47.902810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.658 [2024-10-09 03:11:47.914158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.658 [2024-10-09 03:11:47.914195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.658 [2024-10-09 03:11:47.930573] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.658 [2024-10-09 03:11:47.930609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.658 [2024-10-09 03:11:47.946990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.658 [2024-10-09 03:11:47.947026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.658 [2024-10-09 03:11:47.958958] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.658 [2024-10-09 03:11:47.958993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.917 [2024-10-09 03:11:47.975814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.918 [2024-10-09 03:11:47.976014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.918 [2024-10-09 03:11:47.985859] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.918 [2024-10-09 03:11:47.985897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.918 [2024-10-09 03:11:48.000524] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
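The interleaved subsystem.c:2128 / nvmf_rpc.c:1517 errors above and below are the target rejecting repeated namespace-add requests for NSID 1, which is already allocated; the zcopy test appears to issue these deliberately while bdevperf keeps I/O in flight, so each attempt pauses the subsystem, fails in spdk_nvmf_subsystem_add_ns_ext(), and logs "Unable to add namespace". A single such request could be reproduced with the stock RPC client roughly as below; the bdev name is illustrative and the exact flag spelling is an assumption to be checked against scripts/rpc.py --help.

# Illustrative only: requesting NSID 1 on a subsystem that already has that NSID
# makes spdk_nvmf_subsystem_add_ns_ext() fail and the RPC layer report the error
# seen repeatedly in this log.
scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0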
00:09:04.918 [2024-10-09 03:11:48.000561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.918 [2024-10-09 03:11:48.009768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.918 [2024-10-09 03:11:48.009804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.918 [2024-10-09 03:11:48.022417] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.918 [2024-10-09 03:11:48.022611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.918 [2024-10-09 03:11:48.036790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.918 [2024-10-09 03:11:48.036825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.918 [2024-10-09 03:11:48.046149] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.918 [2024-10-09 03:11:48.046183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.918 [2024-10-09 03:11:48.060168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.918 [2024-10-09 03:11:48.060203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.918 [2024-10-09 03:11:48.070374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.918 [2024-10-09 03:11:48.070408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.918 [2024-10-09 03:11:48.081442] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.918 [2024-10-09 03:11:48.081600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.918 [2024-10-09 03:11:48.092986] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.918 [2024-10-09 03:11:48.093195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.918 [2024-10-09 03:11:48.103905] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.918 [2024-10-09 03:11:48.104103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.918 [2024-10-09 03:11:48.114508] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.918 [2024-10-09 03:11:48.114676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.918 [2024-10-09 03:11:48.126637] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.918 [2024-10-09 03:11:48.126819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.918 [2024-10-09 03:11:48.138805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.918 [2024-10-09 03:11:48.139028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.918 [2024-10-09 03:11:48.150317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.918 [2024-10-09 03:11:48.151300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.918 [2024-10-09 03:11:48.165657] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.918 [2024-10-09 03:11:48.165842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.918 [2024-10-09 03:11:48.181306] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.918 [2024-10-09 03:11:48.181465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.918 [2024-10-09 03:11:48.190682] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.918 [2024-10-09 03:11:48.190856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.918 [2024-10-09 03:11:48.202187] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.918 [2024-10-09 03:11:48.202350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.918 [2024-10-09 03:11:48.212931] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.918 [2024-10-09 03:11:48.213116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.177 [2024-10-09 03:11:48.224029] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.177 [2024-10-09 03:11:48.224256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.177 [2024-10-09 03:11:48.239322] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.177 [2024-10-09 03:11:48.239531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.177 [2024-10-09 03:11:48.249100] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.177 [2024-10-09 03:11:48.249297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.177 [2024-10-09 03:11:48.264479] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.177 [2024-10-09 03:11:48.264654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.177 [2024-10-09 03:11:48.273934] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.177 [2024-10-09 03:11:48.274150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.177 [2024-10-09 03:11:48.288778] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.177 [2024-10-09 03:11:48.288965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.177 [2024-10-09 03:11:48.305571] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.177 [2024-10-09 03:11:48.305770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.177 [2024-10-09 03:11:48.315090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.177 [2024-10-09 03:11:48.315134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.177 [2024-10-09 03:11:48.326427] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.177 [2024-10-09 03:11:48.326464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.177 [2024-10-09 03:11:48.339511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.178 [2024-10-09 03:11:48.339689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.178 [2024-10-09 03:11:48.358646] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.178 [2024-10-09 03:11:48.358815] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.178 [2024-10-09 03:11:48.372965] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.178 [2024-10-09 03:11:48.373002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.178 [2024-10-09 03:11:48.382927] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.178 [2024-10-09 03:11:48.382961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.178 [2024-10-09 03:11:48.397470] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.178 [2024-10-09 03:11:48.397682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.178 [2024-10-09 03:11:48.413638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.178 [2024-10-09 03:11:48.413675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.178 [2024-10-09 03:11:48.423106] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.178 [2024-10-09 03:11:48.423151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.178 [2024-10-09 03:11:48.438984] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.178 [2024-10-09 03:11:48.439021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.178 [2024-10-09 03:11:48.448642] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.178 [2024-10-09 03:11:48.448852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.178 [2024-10-09 03:11:48.463058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.178 [2024-10-09 03:11:48.463276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.178 [2024-10-09 03:11:48.473045] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.178 [2024-10-09 03:11:48.473260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.437 [2024-10-09 03:11:48.487707] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.437 [2024-10-09 03:11:48.487897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.437 [2024-10-09 03:11:48.504565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.437 [2024-10-09 03:11:48.504759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.437 [2024-10-09 03:11:48.514144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.437 [2024-10-09 03:11:48.514334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.437 [2024-10-09 03:11:48.525432] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.437 [2024-10-09 03:11:48.525636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.437 [2024-10-09 03:11:48.537259] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.437 [2024-10-09 03:11:48.537465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.437 [2024-10-09 03:11:48.554142] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.437 [2024-10-09 03:11:48.554319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.437 [2024-10-09 03:11:48.570063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.437 [2024-10-09 03:11:48.570314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.437 [2024-10-09 03:11:48.579572] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.437 [2024-10-09 03:11:48.579762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.437 [2024-10-09 03:11:48.590816] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.437 [2024-10-09 03:11:48.591004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.437 [2024-10-09 03:11:48.602680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.437 [2024-10-09 03:11:48.602856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.437 [2024-10-09 03:11:48.620535] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.437 [2024-10-09 03:11:48.620726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.437 [2024-10-09 03:11:48.631272] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.437 [2024-10-09 03:11:48.631477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.437 [2024-10-09 03:11:48.645934] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.437 [2024-10-09 03:11:48.646167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.437 [2024-10-09 03:11:48.655080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.437 [2024-10-09 03:11:48.655282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.437 [2024-10-09 03:11:48.669955] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.437 [2024-10-09 03:11:48.670208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.437 [2024-10-09 03:11:48.686907] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.437 [2024-10-09 03:11:48.686945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.437 [2024-10-09 03:11:48.697091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.437 [2024-10-09 03:11:48.697308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.437 [2024-10-09 03:11:48.708850] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.437 [2024-10-09 03:11:48.709099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.437 [2024-10-09 03:11:48.720125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.437 [2024-10-09 03:11:48.720177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.437 [2024-10-09 03:11:48.738247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.437 [2024-10-09 03:11:48.738283] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.696 [2024-10-09 03:11:48.752724] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.696 [2024-10-09 03:11:48.752881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.696 [2024-10-09 03:11:48.762729] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.696 [2024-10-09 03:11:48.762766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.696 [2024-10-09 03:11:48.774791] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.696 [2024-10-09 03:11:48.774828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.696 [2024-10-09 03:11:48.785498] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.697 [2024-10-09 03:11:48.785533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.697 [2024-10-09 03:11:48.795770] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.697 [2024-10-09 03:11:48.795962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.697 [2024-10-09 03:11:48.806610] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.697 [2024-10-09 03:11:48.806805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.697 [2024-10-09 03:11:48.819626] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.697 [2024-10-09 03:11:48.819662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.697 [2024-10-09 03:11:48.830922] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.697 [2024-10-09 03:11:48.830957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.697 [2024-10-09 03:11:48.846132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.697 [2024-10-09 03:11:48.846167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.697 [2024-10-09 03:11:48.855181] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.697 [2024-10-09 03:11:48.855214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.697 11680.00 IOPS, 91.25 MiB/s [2024-10-09T03:11:49.000Z] [2024-10-09 03:11:48.871299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.697 [2024-10-09 03:11:48.871337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.697 [2024-10-09 03:11:48.881657] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.697 [2024-10-09 03:11:48.881692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.697 [2024-10-09 03:11:48.897509] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.697 [2024-10-09 03:11:48.897543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.697 [2024-10-09 03:11:48.908003] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.697 [2024-10-09 03:11:48.908041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.697 [2024-10-09 
03:11:48.919042] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.697 [2024-10-09 03:11:48.919233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.697 [2024-10-09 03:11:48.930701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.697 [2024-10-09 03:11:48.930849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.697 [2024-10-09 03:11:48.941809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.697 [2024-10-09 03:11:48.941963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.697 [2024-10-09 03:11:48.953259] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.697 [2024-10-09 03:11:48.953309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.697 [2024-10-09 03:11:48.966189] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.697 [2024-10-09 03:11:48.966254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.697 [2024-10-09 03:11:48.983959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.697 [2024-10-09 03:11:48.983993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.956 [2024-10-09 03:11:49.000231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.956 [2024-10-09 03:11:49.000265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.956 [2024-10-09 03:11:49.017836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.956 [2024-10-09 03:11:49.017888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.956 [2024-10-09 03:11:49.028160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.956 [2024-10-09 03:11:49.028219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.956 [2024-10-09 03:11:49.042126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.956 [2024-10-09 03:11:49.042162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.956 [2024-10-09 03:11:49.052100] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.956 [2024-10-09 03:11:49.052149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.956 [2024-10-09 03:11:49.066725] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.956 [2024-10-09 03:11:49.066933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.956 [2024-10-09 03:11:49.084172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.956 [2024-10-09 03:11:49.084208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.956 [2024-10-09 03:11:49.094154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.957 [2024-10-09 03:11:49.094191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.957 [2024-10-09 03:11:49.108914] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.957 [2024-10-09 03:11:49.108950] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.957 [2024-10-09 03:11:49.119035] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.957 [2024-10-09 03:11:49.119126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.957 [2024-10-09 03:11:49.132938] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.957 [2024-10-09 03:11:49.132974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.957 [2024-10-09 03:11:49.143282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.957 [2024-10-09 03:11:49.143317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.957 [2024-10-09 03:11:49.158936] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.957 [2024-10-09 03:11:49.158970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.957 [2024-10-09 03:11:49.169364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.957 [2024-10-09 03:11:49.169414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.957 [2024-10-09 03:11:49.180966] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.957 [2024-10-09 03:11:49.181001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.957 [2024-10-09 03:11:49.191936] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.957 [2024-10-09 03:11:49.192144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.957 [2024-10-09 03:11:49.206553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.957 [2024-10-09 03:11:49.206588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.957 [2024-10-09 03:11:49.216020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.957 [2024-10-09 03:11:49.216238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.957 [2024-10-09 03:11:49.228582] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.957 [2024-10-09 03:11:49.228617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.957 [2024-10-09 03:11:49.244843] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.957 [2024-10-09 03:11:49.244879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.216 [2024-10-09 03:11:49.263936] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.216 [2024-10-09 03:11:49.264152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.216 [2024-10-09 03:11:49.274574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.216 [2024-10-09 03:11:49.274609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.216 [2024-10-09 03:11:49.289358] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.216 [2024-10-09 03:11:49.289410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.216 [2024-10-09 03:11:49.299028] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.216 [2024-10-09 03:11:49.299106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.216 [2024-10-09 03:11:49.314544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.216 [2024-10-09 03:11:49.314577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.216 [2024-10-09 03:11:49.325797] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.216 [2024-10-09 03:11:49.325834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.216 [2024-10-09 03:11:49.342691] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.216 [2024-10-09 03:11:49.342881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.216 [2024-10-09 03:11:49.353001] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.216 [2024-10-09 03:11:49.353238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.216 [2024-10-09 03:11:49.363340] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.216 [2024-10-09 03:11:49.363552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.216 [2024-10-09 03:11:49.373777] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.216 [2024-10-09 03:11:49.373970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.216 [2024-10-09 03:11:49.383715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.216 [2024-10-09 03:11:49.383901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.216 [2024-10-09 03:11:49.398221] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.216 [2024-10-09 03:11:49.398411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.216 [2024-10-09 03:11:49.414203] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.216 [2024-10-09 03:11:49.414396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.216 [2024-10-09 03:11:49.424187] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.216 [2024-10-09 03:11:49.424412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.216 [2024-10-09 03:11:49.435566] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.216 [2024-10-09 03:11:49.435761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.216 [2024-10-09 03:11:49.446443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.216 [2024-10-09 03:11:49.446650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.216 [2024-10-09 03:11:49.460910] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.216 [2024-10-09 03:11:49.461111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.216 [2024-10-09 03:11:49.470613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.216 [2024-10-09 03:11:49.470803] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.216 [2024-10-09 03:11:49.484109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.216 [2024-10-09 03:11:49.484291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.217 [2024-10-09 03:11:49.499681] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.217 [2024-10-09 03:11:49.499902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.217 [2024-10-09 03:11:49.509437] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.217 [2024-10-09 03:11:49.509626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.476 [2024-10-09 03:11:49.524303] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.476 [2024-10-09 03:11:49.524495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.476 [2024-10-09 03:11:49.542134] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.476 [2024-10-09 03:11:49.542322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.476 [2024-10-09 03:11:49.557231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.476 [2024-10-09 03:11:49.557267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.476 [2024-10-09 03:11:49.566771] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.476 [2024-10-09 03:11:49.566978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.476 [2024-10-09 03:11:49.578910] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.476 [2024-10-09 03:11:49.578947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.476 [2024-10-09 03:11:49.589451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.476 [2024-10-09 03:11:49.589644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.476 [2024-10-09 03:11:49.602263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.476 [2024-10-09 03:11:49.602299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.476 [2024-10-09 03:11:49.612396] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.476 [2024-10-09 03:11:49.612606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.476 [2024-10-09 03:11:49.626695] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.476 [2024-10-09 03:11:49.626730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.476 [2024-10-09 03:11:49.636236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.476 [2024-10-09 03:11:49.636270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.476 [2024-10-09 03:11:49.650718] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.476 [2024-10-09 03:11:49.650753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.476 [2024-10-09 03:11:49.660398] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.476 [2024-10-09 03:11:49.660448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.476 [2024-10-09 03:11:49.674994] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.476 [2024-10-09 03:11:49.675029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.476 [2024-10-09 03:11:49.683877] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.476 [2024-10-09 03:11:49.683913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.476 [2024-10-09 03:11:49.700690] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.476 [2024-10-09 03:11:49.700728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.476 [2024-10-09 03:11:49.716629] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.476 [2024-10-09 03:11:49.716663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.476 [2024-10-09 03:11:49.726150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.476 [2024-10-09 03:11:49.726185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.476 [2024-10-09 03:11:49.739034] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.476 [2024-10-09 03:11:49.739088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.476 [2024-10-09 03:11:49.755172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.476 [2024-10-09 03:11:49.755505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.476 [2024-10-09 03:11:49.772865] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.476 [2024-10-09 03:11:49.773031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.736 [2024-10-09 03:11:49.784594] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.736 [2024-10-09 03:11:49.784713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.736 [2024-10-09 03:11:49.795891] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.736 [2024-10-09 03:11:49.795991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.736 [2024-10-09 03:11:49.812228] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.736 [2024-10-09 03:11:49.812340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.736 [2024-10-09 03:11:49.828491] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.736 [2024-10-09 03:11:49.828663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.736 [2024-10-09 03:11:49.844987] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.736 [2024-10-09 03:11:49.845180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.736 [2024-10-09 03:11:49.862162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.736 [2024-10-09 03:11:49.862215] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.736 11738.50 IOPS, 91.71 MiB/s [2024-10-09T03:11:50.039Z] [2024-10-09 03:11:49.879501] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.736 [2024-10-09 03:11:49.879549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.736 [2024-10-09 03:11:49.889250] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.736 [2024-10-09 03:11:49.889299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.736 [2024-10-09 03:11:49.900821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.736 [2024-10-09 03:11:49.900868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.736 [2024-10-09 03:11:49.911819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.736 [2024-10-09 03:11:49.911866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.736 [2024-10-09 03:11:49.929443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.736 [2024-10-09 03:11:49.929489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.736 [2024-10-09 03:11:49.946299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.736 [2024-10-09 03:11:49.946333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.736 [2024-10-09 03:11:49.956863] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.736 [2024-10-09 03:11:49.956908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.736 [2024-10-09 03:11:49.969118] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.736 [2024-10-09 03:11:49.969163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.736 [2024-10-09 03:11:49.980386] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.736 [2024-10-09 03:11:49.980431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.736 [2024-10-09 03:11:49.995811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.736 [2024-10-09 03:11:49.995878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.736 [2024-10-09 03:11:50.005651] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.736 [2024-10-09 03:11:50.005683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.736 [2024-10-09 03:11:50.020845] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.736 [2024-10-09 03:11:50.020909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.996 [2024-10-09 03:11:50.038130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.996 [2024-10-09 03:11:50.038163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.996 [2024-10-09 03:11:50.053484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.996 [2024-10-09 03:11:50.053515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.996 [2024-10-09 
03:11:50.063204] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:06.996 [2024-10-09 03:11:50.063234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two error lines repeat continuously, with only the timestamps changing, from 03:11:50.074 through 03:11:52.873 while the background randrw job keeps running; the job's periodic throughput samples are interleaved in the stream ...]
00:09:07.775 11866.33 IOPS, 92.71 MiB/s [2024-10-09T03:11:51.078Z]
00:09:08.812 11889.50 IOPS, 92.89 MiB/s [2024-10-09T03:11:52.115Z]
00:09:09.591 11923.80 IOPS, 93.15 MiB/s [2024-10-09T03:11:52.894Z]
00:09:09.591 Latency(us)
00:09:09.591 [2024-10-09T03:11:52.894Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:09.591 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:09.591 Nvme1n1 : 5.01 11925.78 93.17 0.00 0.00 10719.31 4468.36 18707.55
00:09:09.591 [2024-10-09T03:11:52.894Z] ===================================================================================================================
00:09:09.591 [2024-10-09T03:11:52.894Z] Total : 11925.78 93.17 0.00 0.00 10719.31 4468.36 18707.55
[... the same error pair continues between 03:11:52.883 and 03:11:53.119 while the I/O job drains, then zcopy.sh moves on to teardown ...]
00:09:09.851 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65618) - No such process
00:09:09.851 03:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65618
00:09:09.851 03:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:09.851 03:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:09.851 03:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:09.851 03:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:09.851 03:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:09.851 03:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:09.851 03:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:09.851 delay0
00:09:09.851 03:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:09.851 03:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:09:09.851 03:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:09.851 03:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:09.851 03:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
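The RPC sequence just traced, and the abort run that follows, can be reproduced by hand outside the harness. This is only a sketch: it assumes the nvmf target from this run is still up on its default RPC socket, that the subsystem nqn.2016-06.io.spdk:cnode1 and the malloc0 bdev already exist, and that the working directory is the checkout used here (/home/vagrant/spdk_repo/spdk); rpc_cmd in the trace is effectively the test wrapper around scripts/rpc.py, so the same calls can be issued directly:

    # detach the namespace the add_ns loop above kept colliding with
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    # wrap malloc0 in a delay bdev (latency values are in microseconds) and re-expose it as NSID 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # drive the now-slow namespace with the abort example over TCP, as the next log entry does
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'

As a sanity check on the summary table above: at the job's 8192-byte I/O size, 11925.78 IOPS works out to 11925.78 x 8192 / 2^20 ≈ 93.17 MiB/s, which matches the reported MiB/s column.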
00:09:09.851 03:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'
00:09:10.110 [2024-10-09 03:11:53.315529] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:09:16.675 Initializing NVMe Controllers
00:09:16.675 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:09:16.675 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:16.675 Initialization complete. Launching workers.
00:09:16.675 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 180
00:09:16.675 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 467, failed to submit 33
00:09:16.675 success 338, unsuccessful 129, failed 0
00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup
00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:16.675 rmmod nvme_tcp
00:09:16.675 rmmod nvme_fabrics
00:09:16.675 rmmod nvme_keyring
00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 65462 ']'
00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 65462
00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 65462 ']'
00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 65462
00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65462
00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:09:16.675 killing process with pid 65462
00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65462'
00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 65462
00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 65462
00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']'
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.675 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.935 03:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:09:16.935 00:09:16.935 real 0m25.278s 00:09:16.935 user 0m40.062s 00:09:16.935 sys 0m7.613s 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:16.935 ************************************ 00:09:16.935 END TEST nvmf_zcopy 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:16.935 ************************************ 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:16.935 ************************************ 00:09:16.935 START TEST nvmf_nmic 00:09:16.935 ************************************ 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:16.935 * Looking for test storage... 00:09:16.935 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.935 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:17.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.196 --rc genhtml_branch_coverage=1 00:09:17.196 --rc genhtml_function_coverage=1 00:09:17.196 --rc genhtml_legend=1 00:09:17.196 --rc geninfo_all_blocks=1 00:09:17.196 --rc geninfo_unexecuted_blocks=1 00:09:17.196 00:09:17.196 ' 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:17.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.196 --rc genhtml_branch_coverage=1 00:09:17.196 --rc genhtml_function_coverage=1 00:09:17.196 --rc genhtml_legend=1 00:09:17.196 --rc geninfo_all_blocks=1 00:09:17.196 --rc geninfo_unexecuted_blocks=1 00:09:17.196 00:09:17.196 ' 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:17.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.196 --rc genhtml_branch_coverage=1 00:09:17.196 --rc genhtml_function_coverage=1 00:09:17.196 --rc genhtml_legend=1 00:09:17.196 --rc geninfo_all_blocks=1 00:09:17.196 --rc geninfo_unexecuted_blocks=1 00:09:17.196 00:09:17.196 ' 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:17.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.196 --rc genhtml_branch_coverage=1 00:09:17.196 --rc genhtml_function_coverage=1 00:09:17.196 --rc genhtml_legend=1 00:09:17.196 --rc geninfo_all_blocks=1 00:09:17.196 --rc geninfo_unexecuted_blocks=1 00:09:17.196 00:09:17.196 ' 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:17.196 03:12:00 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:17.196 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:17.196 03:12:00 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@458 -- # nvmf_veth_init 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:17.196 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:17.196 Cannot 
find device "nvmf_init_br" 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:17.197 Cannot find device "nvmf_init_br2" 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:17.197 Cannot find device "nvmf_tgt_br" 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:17.197 Cannot find device "nvmf_tgt_br2" 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:17.197 Cannot find device "nvmf_init_br" 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:17.197 Cannot find device "nvmf_init_br2" 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:17.197 Cannot find device "nvmf_tgt_br" 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:17.197 Cannot find device "nvmf_tgt_br2" 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:17.197 Cannot find device "nvmf_br" 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:17.197 Cannot find device "nvmf_init_if" 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:17.197 Cannot find device "nvmf_init_if2" 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:17.197 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:17.197 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:17.197 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:17.457 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:17.457 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:09:17.457 00:09:17.457 --- 10.0.0.3 ping statistics --- 00:09:17.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.457 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:17.457 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:17.457 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:09:17.457 00:09:17.457 --- 10.0.0.4 ping statistics --- 00:09:17.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.457 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:17.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:17.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:09:17.457 00:09:17.457 --- 10.0.0.1 ping statistics --- 00:09:17.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.457 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:17.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:17.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:09:17.457 00:09:17.457 --- 10.0.0.2 ping statistics --- 00:09:17.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.457 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # return 0 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=66004 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 66004 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 66004 ']' 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:17.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:17.457 03:12:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.457 [2024-10-09 03:12:00.720869] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:09:17.457 [2024-10-09 03:12:00.720986] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.716 [2024-10-09 03:12:00.858821] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:17.716 [2024-10-09 03:12:00.961456] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:17.716 [2024-10-09 03:12:00.961519] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:17.716 [2024-10-09 03:12:00.961534] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:17.716 [2024-10-09 03:12:00.961545] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:17.716 [2024-10-09 03:12:00.961554] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:17.716 [2024-10-09 03:12:00.962930] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.716 [2024-10-09 03:12:00.963112] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:17.716 [2024-10-09 03:12:00.963165] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:09:17.716 [2024-10-09 03:12:00.963168] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.975 [2024-10-09 03:12:01.021901] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:18.542 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:18.542 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:18.542 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:18.542 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:18.542 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.542 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:18.542 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:18.542 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.542 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.542 [2024-10-09 03:12:01.834204] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:18.542 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.542 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:18.542 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.542 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.801 Malloc0 00:09:18.801 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.801 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:18.801 03:12:01 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.801 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.801 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.801 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:18.801 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.801 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.801 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.802 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:18.802 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.802 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.802 [2024-10-09 03:12:01.889230] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:18.802 test case1: single bdev can't be used in multiple subsystems 00:09:18.802 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.802 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:18.802 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:18.802 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.802 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.802 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.802 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:09:18.802 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.802 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.802 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.802 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:18.802 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:18.802 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.802 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.802 [2024-10-09 03:12:01.917055] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:18.802 [2024-10-09 03:12:01.917119] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:18.802 [2024-10-09 03:12:01.917147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.802 request: 00:09:18.802 { 00:09:18.802 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:18.802 "namespace": { 00:09:18.802 "bdev_name": "Malloc0", 00:09:18.802 "no_auto_visible": false 00:09:18.802 }, 00:09:18.802 "method": "nvmf_subsystem_add_ns", 00:09:18.802 "req_id": 1 00:09:18.802 } 00:09:18.802 Got JSON-RPC error response 00:09:18.802 response: 00:09:18.802 { 00:09:18.802 "code": -32602, 00:09:18.802 "message": "Invalid parameters" 00:09:18.802 } 00:09:18.802 Adding namespace failed - expected result. 00:09:18.802 test case2: host connect to nvmf target in multiple paths 00:09:18.802 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:18.802 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:18.802 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:18.802 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:18.802 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:18.802 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:09:18.802 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.802 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.802 [2024-10-09 03:12:01.929193] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:09:18.802 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.802 03:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid=cb2c30f2-294c-46db-807f-ce0b3b357918 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:18.802 03:12:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid=cb2c30f2-294c-46db-807f-ce0b3b357918 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:09:19.061 03:12:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:19.061 03:12:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:19.061 03:12:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:19.061 03:12:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:19.061 03:12:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:20.963 03:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:20.963 03:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:20.963 03:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:20.963 03:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:20.963 03:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:20.963 03:12:04 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:20.963 03:12:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:20.963 [global] 00:09:20.963 thread=1 00:09:20.963 invalidate=1 00:09:20.963 rw=write 00:09:20.963 time_based=1 00:09:20.963 runtime=1 00:09:20.963 ioengine=libaio 00:09:20.963 direct=1 00:09:20.963 bs=4096 00:09:20.963 iodepth=1 00:09:20.963 norandommap=0 00:09:20.963 numjobs=1 00:09:20.963 00:09:20.963 verify_dump=1 00:09:20.963 verify_backlog=512 00:09:20.963 verify_state_save=0 00:09:20.963 do_verify=1 00:09:20.963 verify=crc32c-intel 00:09:20.963 [job0] 00:09:20.963 filename=/dev/nvme0n1 00:09:21.222 Could not set queue depth (nvme0n1) 00:09:21.222 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:21.222 fio-3.35 00:09:21.222 Starting 1 thread 00:09:22.601 00:09:22.601 job0: (groupid=0, jobs=1): err= 0: pid=66101: Wed Oct 9 03:12:05 2024 00:09:22.601 read: IOPS=2900, BW=11.3MiB/s (11.9MB/s)(11.3MiB/1001msec) 00:09:22.601 slat (nsec): min=10404, max=74019, avg=14841.48, stdev=5673.60 00:09:22.601 clat (usec): min=130, max=6568, avg=181.84, stdev=133.08 00:09:22.601 lat (usec): min=142, max=6592, avg=196.68, stdev=133.67 00:09:22.601 clat percentiles (usec): 00:09:22.601 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 159], 00:09:22.601 | 30.00th=[ 165], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 182], 00:09:22.601 | 70.00th=[ 188], 80.00th=[ 196], 90.00th=[ 206], 95.00th=[ 217], 00:09:22.601 | 99.00th=[ 243], 99.50th=[ 277], 99.90th=[ 1020], 99.95th=[ 3064], 00:09:22.601 | 99.99th=[ 6587] 00:09:22.601 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:22.601 slat (nsec): min=15303, max=99103, avg=21829.96, stdev=7945.49 00:09:22.601 clat (usec): min=81, max=7085, avg=114.65, stdev=150.32 00:09:22.601 lat (usec): min=97, max=7126, avg=136.48, stdev=151.25 00:09:22.601 clat percentiles (usec): 00:09:22.601 | 1.00th=[ 86], 5.00th=[ 90], 10.00th=[ 93], 20.00th=[ 97], 00:09:22.601 | 30.00th=[ 100], 40.00th=[ 103], 50.00th=[ 106], 60.00th=[ 111], 00:09:22.601 | 70.00th=[ 116], 80.00th=[ 123], 90.00th=[ 135], 95.00th=[ 143], 00:09:22.601 | 99.00th=[ 163], 99.50th=[ 188], 99.90th=[ 1139], 99.95th=[ 3949], 00:09:22.601 | 99.99th=[ 7111] 00:09:22.601 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:09:22.601 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:22.601 lat (usec) : 100=15.40%, 250=84.13%, 500=0.33%, 750=0.02% 00:09:22.601 lat (msec) : 2=0.03%, 4=0.05%, 10=0.03% 00:09:22.601 cpu : usr=1.90%, sys=9.10%, ctx=5975, majf=0, minf=5 00:09:22.601 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:22.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.601 issued rwts: total=2903,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.601 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:22.601 00:09:22.601 Run status group 0 (all jobs): 00:09:22.601 READ: bw=11.3MiB/s (11.9MB/s), 11.3MiB/s-11.3MiB/s (11.9MB/s-11.9MB/s), io=11.3MiB (11.9MB), run=1001-1001msec 00:09:22.601 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:22.601 00:09:22.601 Disk stats 
(read/write): 00:09:22.601 nvme0n1: ios=2610/2757, merge=0/0, ticks=503/334, in_queue=837, util=90.58% 00:09:22.601 03:12:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:22.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:22.601 03:12:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:22.601 03:12:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:22.601 03:12:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:22.601 03:12:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.601 03:12:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:22.601 03:12:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.601 03:12:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:22.601 03:12:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:22.601 03:12:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:22.601 03:12:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:22.601 03:12:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:22.601 03:12:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:22.601 03:12:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:22.601 03:12:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:22.601 03:12:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:22.601 rmmod nvme_tcp 00:09:22.601 rmmod nvme_fabrics 00:09:22.601 rmmod nvme_keyring 00:09:22.601 03:12:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:22.601 03:12:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:22.601 03:12:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:22.601 03:12:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 66004 ']' 00:09:22.601 03:12:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 66004 00:09:22.601 03:12:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 66004 ']' 00:09:22.601 03:12:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 66004 00:09:22.601 03:12:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:22.601 03:12:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:22.601 03:12:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66004 00:09:22.601 killing process with pid 66004 00:09:22.601 03:12:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:22.601 03:12:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:22.601 03:12:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66004' 00:09:22.601 03:12:05 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 66004 00:09:22.601 03:12:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 66004 00:09:22.860 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:22.860 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:22.860 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:22.860 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:22.860 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:09:22.860 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:09:22.860 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:22.860 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:22.860 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:22.860 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:22.860 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:22.860 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:22.860 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:22.860 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:22.860 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:22.860 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:22.860 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:22.860 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:23.119 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:23.119 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:23.119 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:23.119 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:23.119 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:23.119 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.119 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.119 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.119 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:09:23.119 00:09:23.119 real 0m6.217s 00:09:23.119 user 0m19.141s 00:09:23.119 sys 0m2.398s 00:09:23.119 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.119 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:09:23.119 ************************************ 00:09:23.119 END TEST nvmf_nmic 00:09:23.119 ************************************ 00:09:23.119 03:12:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:23.119 03:12:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:23.119 03:12:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:23.119 03:12:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:23.119 ************************************ 00:09:23.119 START TEST nvmf_fio_target 00:09:23.119 ************************************ 00:09:23.119 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:23.119 * Looking for test storage... 00:09:23.119 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:23.119 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:23.119 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:09:23.119 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:23.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.380 --rc genhtml_branch_coverage=1 00:09:23.380 --rc genhtml_function_coverage=1 00:09:23.380 --rc genhtml_legend=1 00:09:23.380 --rc geninfo_all_blocks=1 00:09:23.380 --rc geninfo_unexecuted_blocks=1 00:09:23.380 00:09:23.380 ' 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:23.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.380 --rc genhtml_branch_coverage=1 00:09:23.380 --rc genhtml_function_coverage=1 00:09:23.380 --rc genhtml_legend=1 00:09:23.380 --rc geninfo_all_blocks=1 00:09:23.380 --rc geninfo_unexecuted_blocks=1 00:09:23.380 00:09:23.380 ' 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:23.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.380 --rc genhtml_branch_coverage=1 00:09:23.380 --rc genhtml_function_coverage=1 00:09:23.380 --rc genhtml_legend=1 00:09:23.380 --rc geninfo_all_blocks=1 00:09:23.380 --rc geninfo_unexecuted_blocks=1 00:09:23.380 00:09:23.380 ' 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:23.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.380 --rc genhtml_branch_coverage=1 00:09:23.380 --rc genhtml_function_coverage=1 00:09:23.380 --rc genhtml_legend=1 00:09:23.380 --rc geninfo_all_blocks=1 00:09:23.380 --rc geninfo_unexecuted_blocks=1 00:09:23.380 00:09:23.380 ' 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:23.380 
03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:23.380 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:23.381 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:23.381 03:12:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@458 -- # nvmf_veth_init 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:23.381 Cannot find device "nvmf_init_br" 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:23.381 Cannot find device "nvmf_init_br2" 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:23.381 Cannot find device "nvmf_tgt_br" 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:23.381 Cannot find device "nvmf_tgt_br2" 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:23.381 Cannot find device "nvmf_init_br" 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:23.381 Cannot find device "nvmf_init_br2" 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:23.381 Cannot find device "nvmf_tgt_br" 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:23.381 Cannot find device "nvmf_tgt_br2" 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:23.381 Cannot find device "nvmf_br" 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:23.381 Cannot find device "nvmf_init_if" 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:23.381 Cannot find device "nvmf_init_if2" 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:23.381 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:09:23.381 
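The "Cannot find device" / "Cannot open network namespace" messages here are expected on a freshly provisioned VM: before building the test topology, nvmf_veth_init tears down whatever interfaces, bridge, and namespace a previous run may have left behind, and each teardown command is followed by "# true" so a failure does not abort the test. A rough standalone equivalent of that cleanup is sketched below (run as root; the final namespace delete is an assumption for completeness, the trace only shows the per-interface deletes).

# Best-effort teardown of a leftover SPDK NVMe-oF veth test topology.
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" nomaster 2>/dev/null || true
  ip link set "$dev" down     2>/dev/null || true
done
ip link delete nvmf_br type bridge 2>/dev/null || true
ip link delete nvmf_init_if        2>/dev/null || true
ip link delete nvmf_init_if2       2>/dev/null || true
# Hypothetical final step: drop the target namespace if it still exists.
ip netns delete nvmf_tgt_ns_spdk   2>/dev/null || true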
03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:23.381 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:23.381 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:23.641 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:23.641 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:09:23.641 00:09:23.641 --- 10.0.0.3 ping statistics --- 00:09:23.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.641 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:23.641 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:23.641 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:09:23.641 00:09:23.641 --- 10.0.0.4 ping statistics --- 00:09:23.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.641 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:23.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:23.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:09:23.641 00:09:23.641 --- 10.0.0.1 ping statistics --- 00:09:23.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.641 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:23.641 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:23.641 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:09:23.641 00:09:23.641 --- 10.0.0.2 ping statistics --- 00:09:23.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.641 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # return 0 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=66328 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 66328 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 66328 ']' 00:09:23.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:23.641 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.900 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:23.900 03:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:23.900 [2024-10-09 03:12:06.984937] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
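At this point the veth/bridge topology is in place and the four pings above verify it end to end: the host reaches the target addresses 10.0.0.3 and 10.0.0.4, and the nvmf_tgt_ns_spdk namespace reaches the initiator addresses 10.0.0.1 and 10.0.0.2. The script then loads nvme-tcp on the host and launches nvmf_tgt inside the namespace. A minimal sketch of the same sanity check, assuming the addresses used in this run (the loop and error handling are illustrative; the trace simply runs the four pings inline):

# Verify both directions of the test network before starting the target.
for ip in 10.0.0.3 10.0.0.4; do
  ping -c 1 -W 1 "$ip" >/dev/null || { echo "target $ip unreachable" >&2; exit 1; }
done
for ip in 10.0.0.1 10.0.0.2; do
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 -W 1 "$ip" >/dev/null \
    || { echo "initiator $ip unreachable from namespace" >&2; exit 1; }
done
modprobe nvme-tcp   # host-side NVMe/TCP initiator driver, as in the trace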
00:09:23.900 [2024-10-09 03:12:06.985021] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.900 [2024-10-09 03:12:07.118031] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:24.158 [2024-10-09 03:12:07.209708] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.158 [2024-10-09 03:12:07.210030] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.158 [2024-10-09 03:12:07.210236] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.158 [2024-10-09 03:12:07.210366] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.158 [2024-10-09 03:12:07.210402] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:24.158 [2024-10-09 03:12:07.211758] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.158 [2024-10-09 03:12:07.211892] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:24.158 [2024-10-09 03:12:07.211954] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:09:24.158 [2024-10-09 03:12:07.211956] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.158 [2024-10-09 03:12:07.266936] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:24.158 03:12:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:24.158 03:12:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:09:24.158 03:12:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:24.158 03:12:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:24.158 03:12:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:24.158 03:12:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.158 03:12:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:24.419 [2024-10-09 03:12:07.680694] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:24.419 03:12:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:24.986 03:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:24.986 03:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:25.245 03:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:25.245 03:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:25.504 03:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:25.504 03:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:25.763 03:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:25.763 03:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:26.022 03:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:26.281 03:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:26.281 03:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:26.540 03:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:26.540 03:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:26.799 03:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:26.799 03:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:27.058 03:12:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:27.316 03:12:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:27.316 03:12:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:27.574 03:12:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:27.574 03:12:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:27.832 03:12:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:28.090 [2024-10-09 03:12:11.178806] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:28.090 03:12:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:28.348 03:12:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:28.606 03:12:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid=cb2c30f2-294c-46db-807f-ce0b3b357918 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:28.864 03:12:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:28.864 03:12:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:28.864 03:12:11 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:28.864 03:12:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:28.864 03:12:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:28.864 03:12:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:30.764 03:12:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:30.764 03:12:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:30.764 03:12:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:30.764 03:12:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:30.764 03:12:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:30.764 03:12:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:30.764 03:12:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:30.764 [global] 00:09:30.764 thread=1 00:09:30.764 invalidate=1 00:09:30.764 rw=write 00:09:30.764 time_based=1 00:09:30.764 runtime=1 00:09:30.764 ioengine=libaio 00:09:30.764 direct=1 00:09:30.764 bs=4096 00:09:30.764 iodepth=1 00:09:30.764 norandommap=0 00:09:30.764 numjobs=1 00:09:30.764 00:09:30.764 verify_dump=1 00:09:30.764 verify_backlog=512 00:09:30.764 verify_state_save=0 00:09:30.764 do_verify=1 00:09:30.764 verify=crc32c-intel 00:09:30.764 [job0] 00:09:30.764 filename=/dev/nvme0n1 00:09:30.764 [job1] 00:09:30.764 filename=/dev/nvme0n2 00:09:30.764 [job2] 00:09:30.764 filename=/dev/nvme0n3 00:09:30.764 [job3] 00:09:30.764 filename=/dev/nvme0n4 00:09:30.764 Could not set queue depth (nvme0n1) 00:09:30.764 Could not set queue depth (nvme0n2) 00:09:30.764 Could not set queue depth (nvme0n3) 00:09:30.764 Could not set queue depth (nvme0n4) 00:09:31.023 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:31.023 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:31.023 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:31.023 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:31.023 fio-3.35 00:09:31.023 Starting 4 threads 00:09:32.399 00:09:32.399 job0: (groupid=0, jobs=1): err= 0: pid=66511: Wed Oct 9 03:12:15 2024 00:09:32.399 read: IOPS=938, BW=3752KiB/s (3842kB/s)(3756KiB/1001msec) 00:09:32.399 slat (usec): min=16, max=104, avg=29.77, stdev= 9.84 00:09:32.399 clat (usec): min=307, max=921, avg=526.04, stdev=101.33 00:09:32.399 lat (usec): min=341, max=946, avg=555.80, stdev=104.34 00:09:32.399 clat percentiles (usec): 00:09:32.399 | 1.00th=[ 347], 5.00th=[ 396], 10.00th=[ 416], 20.00th=[ 441], 00:09:32.399 | 30.00th=[ 461], 40.00th=[ 482], 50.00th=[ 502], 60.00th=[ 529], 00:09:32.399 | 70.00th=[ 578], 80.00th=[ 619], 90.00th=[ 668], 95.00th=[ 717], 00:09:32.399 | 99.00th=[ 799], 99.50th=[ 807], 99.90th=[ 922], 99.95th=[ 922], 00:09:32.399 | 99.99th=[ 922] 00:09:32.399 
write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:32.399 slat (usec): min=16, max=356, avg=37.01, stdev=19.34 00:09:32.399 clat (usec): min=164, max=775, avg=423.44, stdev=113.11 00:09:32.399 lat (usec): min=185, max=827, avg=460.45, stdev=119.11 00:09:32.399 clat percentiles (usec): 00:09:32.399 | 1.00th=[ 208], 5.00th=[ 253], 10.00th=[ 273], 20.00th=[ 306], 00:09:32.399 | 30.00th=[ 347], 40.00th=[ 392], 50.00th=[ 429], 60.00th=[ 457], 00:09:32.399 | 70.00th=[ 494], 80.00th=[ 529], 90.00th=[ 570], 95.00th=[ 603], 00:09:32.399 | 99.00th=[ 652], 99.50th=[ 668], 99.90th=[ 717], 99.95th=[ 775], 00:09:32.399 | 99.99th=[ 775] 00:09:32.399 bw ( KiB/s): min= 4096, max= 4096, per=17.61%, avg=4096.00, stdev= 0.00, samples=1 00:09:32.399 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:32.399 lat (usec) : 250=2.39%, 500=58.79%, 750=37.29%, 1000=1.53% 00:09:32.399 cpu : usr=1.90%, sys=5.50%, ctx=1963, majf=0, minf=9 00:09:32.399 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:32.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.399 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.399 issued rwts: total=939,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.399 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:32.399 job1: (groupid=0, jobs=1): err= 0: pid=66512: Wed Oct 9 03:12:15 2024 00:09:32.399 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:32.399 slat (nsec): min=10424, max=77591, avg=13696.14, stdev=4973.57 00:09:32.399 clat (usec): min=162, max=2917, avg=238.80, stdev=70.17 00:09:32.399 lat (usec): min=175, max=2930, avg=252.49, stdev=70.37 00:09:32.400 clat percentiles (usec): 00:09:32.400 | 1.00th=[ 184], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 215], 00:09:32.400 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 241], 00:09:32.400 | 70.00th=[ 247], 80.00th=[ 258], 90.00th=[ 273], 95.00th=[ 285], 00:09:32.400 | 99.00th=[ 326], 99.50th=[ 326], 99.90th=[ 562], 99.95th=[ 1336], 00:09:32.400 | 99.99th=[ 2933] 00:09:32.400 write: IOPS=2477, BW=9910KiB/s (10.1MB/s)(9920KiB/1001msec); 0 zone resets 00:09:32.400 slat (nsec): min=14844, max=72189, avg=20329.12, stdev=6774.38 00:09:32.400 clat (usec): min=101, max=484, avg=171.54, stdev=31.82 00:09:32.400 lat (usec): min=118, max=515, avg=191.87, stdev=32.90 00:09:32.400 clat percentiles (usec): 00:09:32.400 | 1.00th=[ 116], 5.00th=[ 128], 10.00th=[ 137], 20.00th=[ 147], 00:09:32.400 | 30.00th=[ 155], 40.00th=[ 163], 50.00th=[ 169], 60.00th=[ 176], 00:09:32.400 | 70.00th=[ 184], 80.00th=[ 194], 90.00th=[ 208], 95.00th=[ 221], 00:09:32.400 | 99.00th=[ 255], 99.50th=[ 302], 99.90th=[ 457], 99.95th=[ 465], 00:09:32.400 | 99.99th=[ 486] 00:09:32.400 bw ( KiB/s): min= 9760, max= 9760, per=41.97%, avg=9760.00, stdev= 0.00, samples=1 00:09:32.400 iops : min= 2440, max= 2440, avg=2440.00, stdev= 0.00, samples=1 00:09:32.400 lat (usec) : 250=86.97%, 500=12.96%, 750=0.02% 00:09:32.400 lat (msec) : 2=0.02%, 4=0.02% 00:09:32.400 cpu : usr=2.10%, sys=5.80%, ctx=4529, majf=0, minf=9 00:09:32.400 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:32.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.400 issued rwts: total=2048,2480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.400 latency : target=0, window=0, percentile=100.00%, 
depth=1 00:09:32.400 job2: (groupid=0, jobs=1): err= 0: pid=66513: Wed Oct 9 03:12:15 2024 00:09:32.400 read: IOPS=1021, BW=4088KiB/s (4186kB/s)(4096KiB/1002msec) 00:09:32.400 slat (nsec): min=19687, max=87668, avg=28132.60, stdev=10098.65 00:09:32.400 clat (usec): min=229, max=8664, avg=530.64, stdev=468.72 00:09:32.400 lat (usec): min=267, max=8693, avg=558.77, stdev=469.02 00:09:32.400 clat percentiles (usec): 00:09:32.400 | 1.00th=[ 359], 5.00th=[ 392], 10.00th=[ 408], 20.00th=[ 429], 00:09:32.400 | 30.00th=[ 445], 40.00th=[ 457], 50.00th=[ 469], 60.00th=[ 486], 00:09:32.400 | 70.00th=[ 498], 80.00th=[ 523], 90.00th=[ 627], 95.00th=[ 734], 00:09:32.400 | 99.00th=[ 1090], 99.50th=[ 3687], 99.90th=[ 7504], 99.95th=[ 8717], 00:09:32.400 | 99.99th=[ 8717] 00:09:32.400 write: IOPS=1294, BW=5178KiB/s (5302kB/s)(5188KiB/1002msec); 0 zone resets 00:09:32.400 slat (nsec): min=27749, max=80305, avg=35334.03, stdev=9237.34 00:09:32.400 clat (usec): min=143, max=1290, avg=290.27, stdev=94.51 00:09:32.400 lat (usec): min=174, max=1336, avg=325.61, stdev=94.48 00:09:32.400 clat percentiles (usec): 00:09:32.400 | 1.00th=[ 155], 5.00th=[ 167], 10.00th=[ 178], 20.00th=[ 194], 00:09:32.400 | 30.00th=[ 210], 40.00th=[ 239], 50.00th=[ 302], 60.00th=[ 330], 00:09:32.400 | 70.00th=[ 351], 80.00th=[ 375], 90.00th=[ 404], 95.00th=[ 433], 00:09:32.400 | 99.00th=[ 494], 99.50th=[ 523], 99.90th=[ 578], 99.95th=[ 1287], 00:09:32.400 | 99.99th=[ 1287] 00:09:32.400 bw ( KiB/s): min= 4416, max= 5952, per=22.29%, avg=5184.00, stdev=1086.12, samples=2 00:09:32.400 iops : min= 1104, max= 1488, avg=1296.00, stdev=271.53, samples=2 00:09:32.400 lat (usec) : 250=23.78%, 500=62.82%, 750=11.33%, 1000=1.38% 00:09:32.400 lat (msec) : 2=0.34%, 4=0.22%, 10=0.13% 00:09:32.400 cpu : usr=1.60%, sys=5.59%, ctx=2321, majf=0, minf=3 00:09:32.400 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:32.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.400 issued rwts: total=1024,1297,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.400 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:32.400 job3: (groupid=0, jobs=1): err= 0: pid=66514: Wed Oct 9 03:12:15 2024 00:09:32.400 read: IOPS=938, BW=3752KiB/s (3842kB/s)(3756KiB/1001msec) 00:09:32.400 slat (nsec): min=9077, max=65536, avg=16735.52, stdev=5664.36 00:09:32.400 clat (usec): min=235, max=940, avg=540.55, stdev=105.76 00:09:32.400 lat (usec): min=260, max=955, avg=557.28, stdev=106.18 00:09:32.400 clat percentiles (usec): 00:09:32.400 | 1.00th=[ 355], 5.00th=[ 404], 10.00th=[ 424], 20.00th=[ 453], 00:09:32.400 | 30.00th=[ 474], 40.00th=[ 490], 50.00th=[ 519], 60.00th=[ 545], 00:09:32.400 | 70.00th=[ 594], 80.00th=[ 635], 90.00th=[ 685], 95.00th=[ 742], 00:09:32.400 | 99.00th=[ 816], 99.50th=[ 824], 99.90th=[ 938], 99.95th=[ 938], 00:09:32.400 | 99.99th=[ 938] 00:09:32.400 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:32.400 slat (usec): min=15, max=101, avg=29.77, stdev=10.92 00:09:32.400 clat (usec): min=159, max=1202, avg=431.36, stdev=120.81 00:09:32.400 lat (usec): min=185, max=1248, avg=461.13, stdev=122.21 00:09:32.400 clat percentiles (usec): 00:09:32.400 | 1.00th=[ 206], 5.00th=[ 258], 10.00th=[ 277], 20.00th=[ 310], 00:09:32.400 | 30.00th=[ 351], 40.00th=[ 392], 50.00th=[ 433], 60.00th=[ 461], 00:09:32.400 | 70.00th=[ 498], 80.00th=[ 545], 90.00th=[ 586], 95.00th=[ 627], 00:09:32.400 | 
99.00th=[ 685], 99.50th=[ 709], 99.90th=[ 783], 99.95th=[ 1205], 00:09:32.400 | 99.99th=[ 1205] 00:09:32.400 bw ( KiB/s): min= 4096, max= 4096, per=17.61%, avg=4096.00, stdev= 0.00, samples=1 00:09:32.400 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:32.400 lat (usec) : 250=1.83%, 500=56.19%, 750=39.68%, 1000=2.24% 00:09:32.400 lat (msec) : 2=0.05% 00:09:32.400 cpu : usr=1.20%, sys=3.90%, ctx=1963, majf=0, minf=15 00:09:32.400 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:32.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.400 issued rwts: total=939,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.400 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:32.400 00:09:32.400 Run status group 0 (all jobs): 00:09:32.400 READ: bw=19.3MiB/s (20.2MB/s), 3752KiB/s-8184KiB/s (3842kB/s-8380kB/s), io=19.3MiB (20.3MB), run=1001-1002msec 00:09:32.400 WRITE: bw=22.7MiB/s (23.8MB/s), 4092KiB/s-9910KiB/s (4190kB/s-10.1MB/s), io=22.8MiB (23.9MB), run=1001-1002msec 00:09:32.400 00:09:32.400 Disk stats (read/write): 00:09:32.400 nvme0n1: ios=771/1024, merge=0/0, ticks=432/433, in_queue=865, util=90.27% 00:09:32.400 nvme0n2: ios=1903/2048, merge=0/0, ticks=472/372, in_queue=844, util=88.97% 00:09:32.400 nvme0n3: ios=988/1024, merge=0/0, ticks=578/311, in_queue=889, util=90.12% 00:09:32.400 nvme0n4: ios=721/1024, merge=0/0, ticks=352/412, in_queue=764, util=89.88% 00:09:32.400 03:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:32.400 [global] 00:09:32.400 thread=1 00:09:32.400 invalidate=1 00:09:32.400 rw=randwrite 00:09:32.400 time_based=1 00:09:32.400 runtime=1 00:09:32.400 ioengine=libaio 00:09:32.400 direct=1 00:09:32.400 bs=4096 00:09:32.400 iodepth=1 00:09:32.400 norandommap=0 00:09:32.400 numjobs=1 00:09:32.400 00:09:32.400 verify_dump=1 00:09:32.400 verify_backlog=512 00:09:32.400 verify_state_save=0 00:09:32.400 do_verify=1 00:09:32.400 verify=crc32c-intel 00:09:32.400 [job0] 00:09:32.400 filename=/dev/nvme0n1 00:09:32.400 [job1] 00:09:32.400 filename=/dev/nvme0n2 00:09:32.400 [job2] 00:09:32.400 filename=/dev/nvme0n3 00:09:32.400 [job3] 00:09:32.400 filename=/dev/nvme0n4 00:09:32.400 Could not set queue depth (nvme0n1) 00:09:32.400 Could not set queue depth (nvme0n2) 00:09:32.400 Could not set queue depth (nvme0n3) 00:09:32.400 Could not set queue depth (nvme0n4) 00:09:32.400 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.400 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.400 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.400 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.400 fio-3.35 00:09:32.400 Starting 4 threads 00:09:33.796 00:09:33.796 job0: (groupid=0, jobs=1): err= 0: pid=66573: Wed Oct 9 03:12:16 2024 00:09:33.796 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:09:33.796 slat (nsec): min=10249, max=59581, avg=18461.73, stdev=6192.59 00:09:33.796 clat (usec): min=246, max=2343, avg=414.90, stdev=103.32 00:09:33.796 lat (usec): min=275, max=2357, avg=433.36, stdev=102.40 00:09:33.796 clat 
percentiles (usec): 00:09:33.796 | 1.00th=[ 281], 5.00th=[ 302], 10.00th=[ 318], 20.00th=[ 338], 00:09:33.796 | 30.00th=[ 359], 40.00th=[ 379], 50.00th=[ 396], 60.00th=[ 424], 00:09:33.796 | 70.00th=[ 457], 80.00th=[ 490], 90.00th=[ 529], 95.00th=[ 553], 00:09:33.796 | 99.00th=[ 619], 99.50th=[ 627], 99.90th=[ 906], 99.95th=[ 2343], 00:09:33.796 | 99.99th=[ 2343] 00:09:33.797 write: IOPS=1437, BW=5750KiB/s (5888kB/s)(5756KiB/1001msec); 0 zone resets 00:09:33.797 slat (usec): min=14, max=111, avg=30.74, stdev=10.44 00:09:33.797 clat (usec): min=113, max=2859, avg=351.84, stdev=122.10 00:09:33.797 lat (usec): min=145, max=2877, avg=382.58, stdev=123.43 00:09:33.797 clat percentiles (usec): 00:09:33.797 | 1.00th=[ 217], 5.00th=[ 239], 10.00th=[ 251], 20.00th=[ 269], 00:09:33.797 | 30.00th=[ 285], 40.00th=[ 302], 50.00th=[ 318], 60.00th=[ 343], 00:09:33.797 | 70.00th=[ 388], 80.00th=[ 433], 90.00th=[ 486], 95.00th=[ 529], 00:09:33.797 | 99.00th=[ 668], 99.50th=[ 709], 99.90th=[ 1303], 99.95th=[ 2868], 00:09:33.797 | 99.99th=[ 2868] 00:09:33.797 bw ( KiB/s): min= 4312, max= 4312, per=18.69%, avg=4312.00, stdev= 0.00, samples=1 00:09:33.797 iops : min= 1078, max= 1078, avg=1078.00, stdev= 0.00, samples=1 00:09:33.797 lat (usec) : 250=5.36%, 500=82.83%, 750=11.45%, 1000=0.16% 00:09:33.797 lat (msec) : 2=0.12%, 4=0.08% 00:09:33.797 cpu : usr=1.10%, sys=5.80%, ctx=2464, majf=0, minf=13 00:09:33.797 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:33.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.797 issued rwts: total=1024,1439,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.797 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:33.797 job1: (groupid=0, jobs=1): err= 0: pid=66574: Wed Oct 9 03:12:16 2024 00:09:33.797 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:09:33.797 slat (nsec): min=9873, max=90727, avg=23350.55, stdev=11185.94 00:09:33.797 clat (usec): min=181, max=6563, avg=456.91, stdev=369.99 00:09:33.797 lat (usec): min=194, max=6586, avg=480.26, stdev=370.89 00:09:33.797 clat percentiles (usec): 00:09:33.797 | 1.00th=[ 262], 5.00th=[ 306], 10.00th=[ 330], 20.00th=[ 359], 00:09:33.797 | 30.00th=[ 379], 40.00th=[ 396], 50.00th=[ 416], 60.00th=[ 441], 00:09:33.797 | 70.00th=[ 465], 80.00th=[ 494], 90.00th=[ 529], 95.00th=[ 562], 00:09:33.797 | 99.00th=[ 979], 99.50th=[ 3785], 99.90th=[ 4883], 99.95th=[ 6587], 00:09:33.797 | 99.99th=[ 6587] 00:09:33.797 write: IOPS=1296, BW=5187KiB/s (5311kB/s)(5192KiB/1001msec); 0 zone resets 00:09:33.797 slat (usec): min=17, max=193, avg=38.26, stdev=16.27 00:09:33.797 clat (usec): min=21, max=602, avg=347.23, stdev=83.25 00:09:33.797 lat (usec): min=122, max=649, avg=385.49, stdev=84.88 00:09:33.797 clat percentiles (usec): 00:09:33.797 | 1.00th=[ 159], 5.00th=[ 229], 10.00th=[ 262], 20.00th=[ 285], 00:09:33.797 | 30.00th=[ 302], 40.00th=[ 314], 50.00th=[ 330], 60.00th=[ 351], 00:09:33.797 | 70.00th=[ 392], 80.00th=[ 424], 90.00th=[ 465], 95.00th=[ 494], 00:09:33.797 | 99.00th=[ 545], 99.50th=[ 553], 99.90th=[ 562], 99.95th=[ 603], 00:09:33.797 | 99.99th=[ 603] 00:09:33.797 bw ( KiB/s): min= 4096, max= 4096, per=17.76%, avg=4096.00, stdev= 0.00, samples=1 00:09:33.797 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:33.797 lat (usec) : 50=0.13%, 100=0.04%, 250=4.57%, 500=84.67%, 750=10.12% 00:09:33.797 lat (usec) : 1000=0.04% 00:09:33.797 lat (msec) : 
2=0.04%, 4=0.22%, 10=0.17% 00:09:33.797 cpu : usr=1.30%, sys=6.60%, ctx=2336, majf=0, minf=11 00:09:33.797 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:33.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.797 issued rwts: total=1024,1298,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.797 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:33.797 job2: (groupid=0, jobs=1): err= 0: pid=66575: Wed Oct 9 03:12:16 2024 00:09:33.797 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:33.797 slat (nsec): min=10602, max=84729, avg=18821.39, stdev=6356.62 00:09:33.797 clat (usec): min=187, max=801, avg=320.38, stdev=74.70 00:09:33.797 lat (usec): min=201, max=817, avg=339.20, stdev=76.15 00:09:33.797 clat percentiles (usec): 00:09:33.797 | 1.00th=[ 208], 5.00th=[ 225], 10.00th=[ 235], 20.00th=[ 253], 00:09:33.797 | 30.00th=[ 269], 40.00th=[ 289], 50.00th=[ 310], 60.00th=[ 330], 00:09:33.797 | 70.00th=[ 351], 80.00th=[ 383], 90.00th=[ 416], 95.00th=[ 461], 00:09:33.797 | 99.00th=[ 537], 99.50th=[ 545], 99.90th=[ 717], 99.95th=[ 799], 00:09:33.797 | 99.99th=[ 799] 00:09:33.797 write: IOPS=1597, BW=6390KiB/s (6543kB/s)(6396KiB/1001msec); 0 zone resets 00:09:33.797 slat (usec): min=14, max=127, avg=29.92, stdev=10.51 00:09:33.797 clat (usec): min=128, max=894, avg=265.05, stdev=72.25 00:09:33.797 lat (usec): min=151, max=933, avg=294.97, stdev=71.45 00:09:33.797 clat percentiles (usec): 00:09:33.797 | 1.00th=[ 149], 5.00th=[ 174], 10.00th=[ 186], 20.00th=[ 206], 00:09:33.797 | 30.00th=[ 221], 40.00th=[ 237], 50.00th=[ 253], 60.00th=[ 273], 00:09:33.797 | 70.00th=[ 293], 80.00th=[ 318], 90.00th=[ 355], 95.00th=[ 400], 00:09:33.797 | 99.00th=[ 486], 99.50th=[ 510], 99.90th=[ 562], 99.95th=[ 898], 00:09:33.797 | 99.99th=[ 898] 00:09:33.797 bw ( KiB/s): min= 8192, max= 8192, per=35.51%, avg=8192.00, stdev= 0.00, samples=1 00:09:33.797 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:33.797 lat (usec) : 250=33.78%, 500=64.85%, 750=1.31%, 1000=0.06% 00:09:33.797 cpu : usr=1.60%, sys=6.40%, ctx=3136, majf=0, minf=15 00:09:33.797 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:33.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.797 issued rwts: total=1536,1599,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.797 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:33.797 job3: (groupid=0, jobs=1): err= 0: pid=66576: Wed Oct 9 03:12:16 2024 00:09:33.797 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:09:33.797 slat (nsec): min=9818, max=67647, avg=17425.59, stdev=7898.47 00:09:33.797 clat (usec): min=265, max=2424, avg=416.02, stdev=100.31 00:09:33.797 lat (usec): min=278, max=2444, avg=433.45, stdev=103.61 00:09:33.797 clat percentiles (usec): 00:09:33.797 | 1.00th=[ 285], 5.00th=[ 306], 10.00th=[ 322], 20.00th=[ 347], 00:09:33.797 | 30.00th=[ 363], 40.00th=[ 383], 50.00th=[ 404], 60.00th=[ 424], 00:09:33.797 | 70.00th=[ 453], 80.00th=[ 482], 90.00th=[ 519], 95.00th=[ 545], 00:09:33.797 | 99.00th=[ 611], 99.50th=[ 627], 99.90th=[ 947], 99.95th=[ 2409], 00:09:33.797 | 99.99th=[ 2409] 00:09:33.797 write: IOPS=1435, BW=5742KiB/s (5880kB/s)(5748KiB/1001msec); 0 zone resets 00:09:33.797 slat (usec): min=12, max=123, avg=28.35, stdev=14.58 
00:09:33.797 clat (usec): min=200, max=2946, avg=354.80, stdev=117.22 00:09:33.797 lat (usec): min=221, max=2970, avg=383.15, stdev=123.49 00:09:33.797 clat percentiles (usec): 00:09:33.797 | 1.00th=[ 227], 5.00th=[ 247], 10.00th=[ 260], 20.00th=[ 277], 00:09:33.797 | 30.00th=[ 297], 40.00th=[ 310], 50.00th=[ 326], 60.00th=[ 347], 00:09:33.797 | 70.00th=[ 383], 80.00th=[ 433], 90.00th=[ 486], 95.00th=[ 529], 00:09:33.797 | 99.00th=[ 635], 99.50th=[ 685], 99.90th=[ 1450], 99.95th=[ 2933], 00:09:33.797 | 99.99th=[ 2933] 00:09:33.797 bw ( KiB/s): min= 4312, max= 4312, per=18.69%, avg=4312.00, stdev= 0.00, samples=1 00:09:33.797 iops : min= 1078, max= 1078, avg=1078.00, stdev= 0.00, samples=1 00:09:33.797 lat (usec) : 250=3.49%, 500=85.66%, 750=10.61%, 1000=0.12% 00:09:33.797 lat (msec) : 2=0.04%, 4=0.08% 00:09:33.797 cpu : usr=1.50%, sys=4.90%, ctx=2461, majf=0, minf=7 00:09:33.797 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:33.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.797 issued rwts: total=1024,1437,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.797 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:33.797 00:09:33.797 Run status group 0 (all jobs): 00:09:33.797 READ: bw=18.0MiB/s (18.9MB/s), 4092KiB/s-6138KiB/s (4190kB/s-6285kB/s), io=18.0MiB (18.9MB), run=1001-1001msec 00:09:33.797 WRITE: bw=22.5MiB/s (23.6MB/s), 5187KiB/s-6390KiB/s (5311kB/s-6543kB/s), io=22.6MiB (23.6MB), run=1001-1001msec 00:09:33.797 00:09:33.797 Disk stats (read/write): 00:09:33.797 nvme0n1: ios=1074/1043, merge=0/0, ticks=486/374, in_queue=860, util=89.57% 00:09:33.797 nvme0n2: ios=993/1024, merge=0/0, ticks=454/368, in_queue=822, util=87.75% 00:09:33.797 nvme0n3: ios=1283/1536, merge=0/0, ticks=449/416, in_queue=865, util=90.21% 00:09:33.797 nvme0n4: ios=1024/1043, merge=0/0, ticks=411/363, in_queue=774, util=89.73% 00:09:33.797 03:12:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:33.797 [global] 00:09:33.797 thread=1 00:09:33.797 invalidate=1 00:09:33.797 rw=write 00:09:33.797 time_based=1 00:09:33.797 runtime=1 00:09:33.797 ioengine=libaio 00:09:33.797 direct=1 00:09:33.797 bs=4096 00:09:33.797 iodepth=128 00:09:33.797 norandommap=0 00:09:33.797 numjobs=1 00:09:33.797 00:09:33.797 verify_dump=1 00:09:33.797 verify_backlog=512 00:09:33.797 verify_state_save=0 00:09:33.797 do_verify=1 00:09:33.797 verify=crc32c-intel 00:09:33.797 [job0] 00:09:33.797 filename=/dev/nvme0n1 00:09:33.797 [job1] 00:09:33.797 filename=/dev/nvme0n2 00:09:33.797 [job2] 00:09:33.797 filename=/dev/nvme0n3 00:09:33.797 [job3] 00:09:33.797 filename=/dev/nvme0n4 00:09:33.797 Could not set queue depth (nvme0n1) 00:09:33.797 Could not set queue depth (nvme0n2) 00:09:33.797 Could not set queue depth (nvme0n3) 00:09:33.797 Could not set queue depth (nvme0n4) 00:09:33.797 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:33.797 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:33.797 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:33.797 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:33.797 fio-3.35 00:09:33.797 
Starting 4 threads 00:09:35.173 00:09:35.173 job0: (groupid=0, jobs=1): err= 0: pid=66630: Wed Oct 9 03:12:18 2024 00:09:35.173 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:09:35.173 slat (usec): min=4, max=5860, avg=130.12, stdev=534.52 00:09:35.173 clat (usec): min=9914, max=23990, avg=17190.65, stdev=1782.00 00:09:35.173 lat (usec): min=9933, max=24049, avg=17320.77, stdev=1835.74 00:09:35.173 clat percentiles (usec): 00:09:35.173 | 1.00th=[12911], 5.00th=[14484], 10.00th=[15139], 20.00th=[15926], 00:09:35.173 | 30.00th=[16319], 40.00th=[16712], 50.00th=[16909], 60.00th=[17433], 00:09:35.173 | 70.00th=[17957], 80.00th=[18482], 90.00th=[19268], 95.00th=[20579], 00:09:35.173 | 99.00th=[21890], 99.50th=[22152], 99.90th=[23200], 99.95th=[23725], 00:09:35.173 | 99.99th=[23987] 00:09:35.173 write: IOPS=4002, BW=15.6MiB/s (16.4MB/s)(15.7MiB/1002msec); 0 zone resets 00:09:35.173 slat (usec): min=13, max=5239, avg=124.76, stdev=595.72 00:09:35.173 clat (usec): min=409, max=22738, avg=16155.22, stdev=1908.98 00:09:35.173 lat (usec): min=5010, max=22761, avg=16279.98, stdev=1976.34 00:09:35.173 clat percentiles (usec): 00:09:35.173 | 1.00th=[ 6063], 5.00th=[14353], 10.00th=[14615], 20.00th=[15139], 00:09:35.173 | 30.00th=[15533], 40.00th=[15795], 50.00th=[16057], 60.00th=[16319], 00:09:35.173 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17957], 95.00th=[19006], 00:09:35.173 | 99.00th=[21365], 99.50th=[21627], 99.90th=[22676], 99.95th=[22676], 00:09:35.173 | 99.99th=[22676] 00:09:35.173 bw ( KiB/s): min=14680, max=16416, per=34.82%, avg=15548.00, stdev=1227.54, samples=2 00:09:35.173 iops : min= 3670, max= 4104, avg=3887.00, stdev=306.88, samples=2 00:09:35.173 lat (usec) : 500=0.01% 00:09:35.173 lat (msec) : 10=0.63%, 20=94.05%, 50=5.31% 00:09:35.173 cpu : usr=4.20%, sys=12.09%, ctx=319, majf=0, minf=1 00:09:35.173 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:35.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:35.173 issued rwts: total=3584,4011,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.173 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:35.173 job1: (groupid=0, jobs=1): err= 0: pid=66633: Wed Oct 9 03:12:18 2024 00:09:35.173 read: IOPS=1540, BW=6161KiB/s (6308kB/s)(6216KiB/1009msec) 00:09:35.173 slat (usec): min=5, max=11284, avg=288.23, stdev=1221.86 00:09:35.173 clat (usec): min=8003, max=47977, avg=34739.04, stdev=4821.35 00:09:35.173 lat (usec): min=10633, max=48240, avg=35027.27, stdev=4906.14 00:09:35.173 clat percentiles (usec): 00:09:35.173 | 1.00th=[13698], 5.00th=[28181], 10.00th=[29754], 20.00th=[31589], 00:09:35.173 | 30.00th=[32637], 40.00th=[34341], 50.00th=[34866], 60.00th=[35390], 00:09:35.173 | 70.00th=[36439], 80.00th=[38011], 90.00th=[40633], 95.00th=[43254], 00:09:35.173 | 99.00th=[45876], 99.50th=[45876], 99.90th=[47973], 99.95th=[47973], 00:09:35.173 | 99.99th=[47973] 00:09:35.173 write: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec); 0 zone resets 00:09:35.173 slat (usec): min=14, max=11656, avg=260.32, stdev=904.41 00:09:35.173 clat (usec): min=16064, max=48237, avg=35547.71, stdev=4277.00 00:09:35.173 lat (usec): min=16096, max=48261, avg=35808.03, stdev=4302.31 00:09:35.173 clat percentiles (usec): 00:09:35.173 | 1.00th=[21627], 5.00th=[27657], 10.00th=[31589], 20.00th=[32375], 00:09:35.173 | 30.00th=[34341], 40.00th=[35390], 50.00th=[35914], 60.00th=[36439], 00:09:35.173 | 
70.00th=[36963], 80.00th=[37487], 90.00th=[40109], 95.00th=[44303], 00:09:35.173 | 99.00th=[46924], 99.50th=[47973], 99.90th=[47973], 99.95th=[47973], 00:09:35.173 | 99.99th=[48497] 00:09:35.173 bw ( KiB/s): min= 7312, max= 8208, per=17.38%, avg=7760.00, stdev=633.57, samples=2 00:09:35.173 iops : min= 1828, max= 2052, avg=1940.00, stdev=158.39, samples=2 00:09:35.173 lat (msec) : 10=0.03%, 20=0.89%, 50=99.08% 00:09:35.173 cpu : usr=3.08%, sys=6.25%, ctx=284, majf=0, minf=7 00:09:35.173 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:09:35.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:35.173 issued rwts: total=1554,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.173 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:35.173 job2: (groupid=0, jobs=1): err= 0: pid=66634: Wed Oct 9 03:12:18 2024 00:09:35.173 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:09:35.173 slat (usec): min=7, max=12088, avg=155.37, stdev=779.57 00:09:35.173 clat (usec): min=14575, max=28036, avg=20611.70, stdev=2010.10 00:09:35.173 lat (usec): min=18682, max=28050, avg=20767.07, stdev=1861.34 00:09:35.173 clat percentiles (usec): 00:09:35.173 | 1.00th=[15664], 5.00th=[18744], 10.00th=[19268], 20.00th=[19530], 00:09:35.173 | 30.00th=[19792], 40.00th=[20055], 50.00th=[20055], 60.00th=[20317], 00:09:35.173 | 70.00th=[20841], 80.00th=[21365], 90.00th=[22152], 95.00th=[25560], 00:09:35.173 | 99.00th=[27919], 99.50th=[27919], 99.90th=[27919], 99.95th=[27919], 00:09:35.173 | 99.99th=[27919] 00:09:35.173 write: IOPS=3156, BW=12.3MiB/s (12.9MB/s)(12.4MiB/1004msec); 0 zone resets 00:09:35.173 slat (usec): min=10, max=9172, avg=156.45, stdev=728.34 00:09:35.173 clat (usec): min=298, max=24668, avg=19881.36, stdev=2311.21 00:09:35.173 lat (usec): min=4144, max=24920, avg=20037.80, stdev=2202.03 00:09:35.173 clat percentiles (usec): 00:09:35.173 | 1.00th=[ 5211], 5.00th=[16319], 10.00th=[18744], 20.00th=[19530], 00:09:35.173 | 30.00th=[19792], 40.00th=[20055], 50.00th=[20055], 60.00th=[20317], 00:09:35.173 | 70.00th=[20579], 80.00th=[20841], 90.00th=[21627], 95.00th=[22152], 00:09:35.173 | 99.00th=[24511], 99.50th=[24511], 99.90th=[24773], 99.95th=[24773], 00:09:35.173 | 99.99th=[24773] 00:09:35.173 bw ( KiB/s): min=12288, max=12312, per=27.54%, avg=12300.00, stdev=16.97, samples=2 00:09:35.173 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:09:35.173 lat (usec) : 500=0.02% 00:09:35.173 lat (msec) : 10=0.75%, 20=42.03%, 50=57.20% 00:09:35.173 cpu : usr=3.39%, sys=10.37%, ctx=196, majf=0, minf=4 00:09:35.173 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:35.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:35.173 issued rwts: total=3072,3169,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.173 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:35.173 job3: (groupid=0, jobs=1): err= 0: pid=66635: Wed Oct 9 03:12:18 2024 00:09:35.173 read: IOPS=1538, BW=6154KiB/s (6302kB/s)(6216KiB/1010msec) 00:09:35.173 slat (usec): min=4, max=10680, avg=283.41, stdev=1183.58 00:09:35.173 clat (usec): min=8029, max=48056, avg=34200.19, stdev=4591.26 00:09:35.173 lat (usec): min=10746, max=48354, avg=34483.61, stdev=4687.54 00:09:35.173 clat percentiles (usec): 00:09:35.173 | 1.00th=[13698], 
5.00th=[28181], 10.00th=[29754], 20.00th=[31327], 00:09:35.173 | 30.00th=[32375], 40.00th=[33817], 50.00th=[34341], 60.00th=[34866], 00:09:35.173 | 70.00th=[35390], 80.00th=[36963], 90.00th=[39060], 95.00th=[42730], 00:09:35.173 | 99.00th=[45876], 99.50th=[45876], 99.90th=[47973], 99.95th=[47973], 00:09:35.173 | 99.99th=[47973] 00:09:35.173 write: IOPS=2027, BW=8111KiB/s (8306kB/s)(8192KiB/1010msec); 0 zone resets 00:09:35.173 slat (usec): min=15, max=13362, avg=264.81, stdev=951.48 00:09:35.173 clat (usec): min=16057, max=48352, avg=35977.75, stdev=4046.25 00:09:35.173 lat (usec): min=16104, max=48376, avg=36242.56, stdev=4077.58 00:09:35.173 clat percentiles (usec): 00:09:35.173 | 1.00th=[21627], 5.00th=[30802], 10.00th=[31589], 20.00th=[33424], 00:09:35.173 | 30.00th=[34866], 40.00th=[35390], 50.00th=[35914], 60.00th=[36439], 00:09:35.173 | 70.00th=[36963], 80.00th=[37487], 90.00th=[40109], 95.00th=[44303], 00:09:35.173 | 99.00th=[46924], 99.50th=[47973], 99.90th=[48497], 99.95th=[48497], 00:09:35.173 | 99.99th=[48497] 00:09:35.173 bw ( KiB/s): min= 7312, max= 8192, per=17.36%, avg=7752.00, stdev=622.25, samples=2 00:09:35.173 iops : min= 1828, max= 2048, avg=1938.00, stdev=155.56, samples=2 00:09:35.173 lat (msec) : 10=0.03%, 20=0.89%, 50=99.08% 00:09:35.173 cpu : usr=2.08%, sys=7.04%, ctx=275, majf=0, minf=8 00:09:35.173 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:09:35.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:35.173 issued rwts: total=1554,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.173 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:35.173 00:09:35.173 Run status group 0 (all jobs): 00:09:35.173 READ: bw=37.8MiB/s (39.6MB/s), 6154KiB/s-14.0MiB/s (6302kB/s-14.7MB/s), io=38.1MiB (40.0MB), run=1002-1010msec 00:09:35.173 WRITE: bw=43.6MiB/s (45.7MB/s), 8111KiB/s-15.6MiB/s (8306kB/s-16.4MB/s), io=44.0MiB (46.2MB), run=1002-1010msec 00:09:35.173 00:09:35.173 Disk stats (read/write): 00:09:35.173 nvme0n1: ios=3121/3456, merge=0/0, ticks=16751/15931, in_queue=32682, util=88.97% 00:09:35.173 nvme0n2: ios=1552/1543, merge=0/0, ticks=18007/16620, in_queue=34627, util=88.11% 00:09:35.173 nvme0n3: ios=2577/2784, merge=0/0, ticks=12012/12485, in_queue=24497, util=88.72% 00:09:35.173 nvme0n4: ios=1528/1543, merge=0/0, ticks=17657/17002, in_queue=34659, util=89.69% 00:09:35.173 03:12:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:35.173 [global] 00:09:35.173 thread=1 00:09:35.173 invalidate=1 00:09:35.173 rw=randwrite 00:09:35.173 time_based=1 00:09:35.173 runtime=1 00:09:35.174 ioengine=libaio 00:09:35.174 direct=1 00:09:35.174 bs=4096 00:09:35.174 iodepth=128 00:09:35.174 norandommap=0 00:09:35.174 numjobs=1 00:09:35.174 00:09:35.174 verify_dump=1 00:09:35.174 verify_backlog=512 00:09:35.174 verify_state_save=0 00:09:35.174 do_verify=1 00:09:35.174 verify=crc32c-intel 00:09:35.174 [job0] 00:09:35.174 filename=/dev/nvme0n1 00:09:35.174 [job1] 00:09:35.174 filename=/dev/nvme0n2 00:09:35.174 [job2] 00:09:35.174 filename=/dev/nvme0n3 00:09:35.174 [job3] 00:09:35.174 filename=/dev/nvme0n4 00:09:35.174 Could not set queue depth (nvme0n1) 00:09:35.174 Could not set queue depth (nvme0n2) 00:09:35.174 Could not set queue depth (nvme0n3) 00:09:35.174 Could not set queue depth (nvme0n4) 00:09:35.174 job0: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:35.174 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:35.174 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:35.174 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:35.174 fio-3.35 00:09:35.174 Starting 4 threads 00:09:36.548 00:09:36.548 job0: (groupid=0, jobs=1): err= 0: pid=66692: Wed Oct 9 03:12:19 2024 00:09:36.548 read: IOPS=1759, BW=7040KiB/s (7209kB/s)(7096KiB/1008msec) 00:09:36.548 slat (usec): min=5, max=17544, avg=285.40, stdev=1187.19 00:09:36.548 clat (usec): min=3538, max=52910, avg=33721.33, stdev=6878.58 00:09:36.548 lat (usec): min=8282, max=52922, avg=34006.73, stdev=6896.01 00:09:36.548 clat percentiles (usec): 00:09:36.548 | 1.00th=[12780], 5.00th=[22676], 10.00th=[25822], 20.00th=[31065], 00:09:36.548 | 30.00th=[31851], 40.00th=[32900], 50.00th=[34341], 60.00th=[34866], 00:09:36.548 | 70.00th=[35390], 80.00th=[36439], 90.00th=[43779], 95.00th=[45876], 00:09:36.548 | 99.00th=[50070], 99.50th=[51643], 99.90th=[52691], 99.95th=[52691], 00:09:36.548 | 99.99th=[52691] 00:09:36.548 write: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec); 0 zone resets 00:09:36.548 slat (usec): min=5, max=13349, avg=234.65, stdev=1110.15 00:09:36.548 clat (usec): min=12938, max=50708, avg=33151.69, stdev=5402.77 00:09:36.548 lat (usec): min=12962, max=50971, avg=33386.35, stdev=5350.74 00:09:36.548 clat percentiles (usec): 00:09:36.548 | 1.00th=[16319], 5.00th=[23725], 10.00th=[26346], 20.00th=[30278], 00:09:36.548 | 30.00th=[31589], 40.00th=[32637], 50.00th=[33424], 60.00th=[34341], 00:09:36.548 | 70.00th=[35390], 80.00th=[36963], 90.00th=[39584], 95.00th=[40109], 00:09:36.548 | 99.00th=[45351], 99.50th=[49546], 99.90th=[50594], 99.95th=[50594], 00:09:36.548 | 99.99th=[50594] 00:09:36.548 bw ( KiB/s): min= 8175, max= 8192, per=18.31%, avg=8183.50, stdev=12.02, samples=2 00:09:36.549 iops : min= 2043, max= 2048, avg=2045.50, stdev= 3.54, samples=2 00:09:36.549 lat (msec) : 4=0.03%, 10=0.34%, 20=2.85%, 50=96.02%, 100=0.76% 00:09:36.549 cpu : usr=1.39%, sys=6.75%, ctx=420, majf=0, minf=5 00:09:36.549 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:09:36.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.549 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:36.549 issued rwts: total=1774,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.549 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:36.549 job1: (groupid=0, jobs=1): err= 0: pid=66693: Wed Oct 9 03:12:19 2024 00:09:36.549 read: IOPS=4064, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1006msec) 00:09:36.549 slat (usec): min=5, max=12859, avg=118.98, stdev=745.02 00:09:36.549 clat (usec): min=2257, max=30664, avg=16105.72, stdev=2953.68 00:09:36.549 lat (usec): min=5238, max=30689, avg=16224.70, stdev=2974.87 00:09:36.549 clat percentiles (usec): 00:09:36.549 | 1.00th=[ 8291], 5.00th=[10814], 10.00th=[14353], 20.00th=[15008], 00:09:36.549 | 30.00th=[15401], 40.00th=[15533], 50.00th=[15795], 60.00th=[16319], 00:09:36.549 | 70.00th=[16450], 80.00th=[16909], 90.00th=[17957], 95.00th=[19792], 00:09:36.549 | 99.00th=[28181], 99.50th=[29754], 99.90th=[30540], 99.95th=[30540], 00:09:36.549 | 99.99th=[30540] 00:09:36.549 write: IOPS=4071, 
BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:09:36.549 slat (usec): min=5, max=14726, avg=117.46, stdev=714.13 00:09:36.549 clat (usec): min=3891, max=30580, avg=15028.88, stdev=2268.16 00:09:36.549 lat (usec): min=3905, max=30592, avg=15146.34, stdev=2188.09 00:09:36.549 clat percentiles (usec): 00:09:36.549 | 1.00th=[ 6587], 5.00th=[12911], 10.00th=[13435], 20.00th=[13829], 00:09:36.549 | 30.00th=[14091], 40.00th=[14484], 50.00th=[14877], 60.00th=[15270], 00:09:36.549 | 70.00th=[15795], 80.00th=[16581], 90.00th=[17171], 95.00th=[17695], 00:09:36.549 | 99.00th=[23725], 99.50th=[23987], 99.90th=[24249], 99.95th=[24249], 00:09:36.549 | 99.99th=[30540] 00:09:36.549 bw ( KiB/s): min=16384, max=16384, per=36.65%, avg=16384.00, stdev= 0.00, samples=2 00:09:36.549 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:09:36.549 lat (msec) : 4=0.11%, 10=2.52%, 20=94.01%, 50=3.36% 00:09:36.549 cpu : usr=4.08%, sys=11.44%, ctx=228, majf=0, minf=5 00:09:36.549 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:36.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.549 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:36.549 issued rwts: total=4089,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.549 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:36.549 job2: (groupid=0, jobs=1): err= 0: pid=66694: Wed Oct 9 03:12:19 2024 00:09:36.549 read: IOPS=1719, BW=6876KiB/s (7042kB/s)(6904KiB/1004msec) 00:09:36.549 slat (usec): min=7, max=14472, avg=267.97, stdev=1180.65 00:09:36.549 clat (usec): min=1896, max=48816, avg=33189.54, stdev=6401.51 00:09:36.549 lat (usec): min=3784, max=49566, avg=33457.51, stdev=6445.38 00:09:36.549 clat percentiles (usec): 00:09:36.549 | 1.00th=[ 9110], 5.00th=[24249], 10.00th=[26608], 20.00th=[30540], 00:09:36.549 | 30.00th=[31851], 40.00th=[32900], 50.00th=[33817], 60.00th=[34341], 00:09:36.549 | 70.00th=[35390], 80.00th=[36439], 90.00th=[39584], 95.00th=[43779], 00:09:36.549 | 99.00th=[47449], 99.50th=[48497], 99.90th=[49021], 99.95th=[49021], 00:09:36.549 | 99.99th=[49021] 00:09:36.549 write: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec); 0 zone resets 00:09:36.549 slat (usec): min=5, max=13979, avg=254.89, stdev=1196.74 00:09:36.549 clat (usec): min=18055, max=49895, avg=32974.64, stdev=4461.85 00:09:36.549 lat (usec): min=18079, max=52389, avg=33229.53, stdev=4512.14 00:09:36.549 clat percentiles (usec): 00:09:36.549 | 1.00th=[22938], 5.00th=[25297], 10.00th=[27657], 20.00th=[30278], 00:09:36.549 | 30.00th=[31327], 40.00th=[32113], 50.00th=[32900], 60.00th=[33817], 00:09:36.549 | 70.00th=[34866], 80.00th=[35914], 90.00th=[37487], 95.00th=[38536], 00:09:36.549 | 99.00th=[46924], 99.50th=[47973], 99.90th=[49021], 99.95th=[50070], 00:09:36.549 | 99.99th=[50070] 00:09:36.549 bw ( KiB/s): min= 8192, max= 8192, per=18.33%, avg=8192.00, stdev= 0.00, samples=2 00:09:36.549 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:09:36.549 lat (msec) : 2=0.03%, 4=0.13%, 10=1.03%, 20=0.72%, 50=98.09% 00:09:36.549 cpu : usr=2.19%, sys=5.88%, ctx=458, majf=0, minf=4 00:09:36.549 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:09:36.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.549 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:36.549 issued rwts: total=1726,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.549 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:09:36.549 job3: (groupid=0, jobs=1): err= 0: pid=66695: Wed Oct 9 03:12:19 2024 00:09:36.549 read: IOPS=2915, BW=11.4MiB/s (11.9MB/s)(11.5MiB/1008msec) 00:09:36.549 slat (usec): min=7, max=14613, avg=161.87, stdev=1107.37 00:09:36.549 clat (usec): min=3279, max=36917, avg=21912.48, stdev=2969.54 00:09:36.549 lat (usec): min=12336, max=41249, avg=22074.34, stdev=3015.44 00:09:36.549 clat percentiles (usec): 00:09:36.549 | 1.00th=[12911], 5.00th=[15139], 10.00th=[20317], 20.00th=[20841], 00:09:36.549 | 30.00th=[21365], 40.00th=[21890], 50.00th=[22152], 60.00th=[22414], 00:09:36.549 | 70.00th=[22676], 80.00th=[23200], 90.00th=[23987], 95.00th=[25560], 00:09:36.549 | 99.00th=[32637], 99.50th=[33817], 99.90th=[34341], 99.95th=[34341], 00:09:36.549 | 99.99th=[36963] 00:09:36.549 write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec); 0 zone resets 00:09:36.549 slat (usec): min=10, max=15990, avg=163.04, stdev=1071.41 00:09:36.549 clat (usec): min=10107, max=27886, avg=20615.73, stdev=2146.57 00:09:36.549 lat (usec): min=13866, max=28027, avg=20778.77, stdev=1922.00 00:09:36.549 clat percentiles (usec): 00:09:36.549 | 1.00th=[12649], 5.00th=[17433], 10.00th=[18744], 20.00th=[19268], 00:09:36.549 | 30.00th=[19792], 40.00th=[20317], 50.00th=[20841], 60.00th=[21103], 00:09:36.549 | 70.00th=[21365], 80.00th=[21890], 90.00th=[22676], 95.00th=[23200], 00:09:36.549 | 99.00th=[27657], 99.50th=[27657], 99.90th=[27919], 99.95th=[27919], 00:09:36.549 | 99.99th=[27919] 00:09:36.549 bw ( KiB/s): min=12288, max=12288, per=27.49%, avg=12288.00, stdev= 0.00, samples=2 00:09:36.549 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:09:36.549 lat (msec) : 4=0.02%, 20=21.41%, 50=78.57% 00:09:36.549 cpu : usr=3.57%, sys=8.94%, ctx=129, majf=0, minf=3 00:09:36.549 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:36.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.549 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:36.549 issued rwts: total=2939,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.549 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:36.549 00:09:36.549 Run status group 0 (all jobs): 00:09:36.549 READ: bw=40.8MiB/s (42.8MB/s), 6876KiB/s-15.9MiB/s (7042kB/s-16.6MB/s), io=41.1MiB (43.1MB), run=1004-1008msec 00:09:36.549 WRITE: bw=43.7MiB/s (45.8MB/s), 8127KiB/s-15.9MiB/s (8322kB/s-16.7MB/s), io=44.0MiB (46.1MB), run=1004-1008msec 00:09:36.549 00:09:36.549 Disk stats (read/write): 00:09:36.549 nvme0n1: ios=1585/1768, merge=0/0, ticks=26062/26199, in_queue=52261, util=88.13% 00:09:36.549 nvme0n2: ios=3355/3584, merge=0/0, ticks=51015/50360, in_queue=101375, util=88.38% 00:09:36.549 nvme0n3: ios=1557/1675, merge=0/0, ticks=25047/24824, in_queue=49871, util=87.64% 00:09:36.549 nvme0n4: ios=2448/2560, merge=0/0, ticks=51588/50116, in_queue=101704, util=89.76% 00:09:36.549 03:12:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:36.549 03:12:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66709 00:09:36.549 03:12:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:36.549 03:12:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:36.549 [global] 00:09:36.549 thread=1 00:09:36.549 invalidate=1 00:09:36.549 rw=read 00:09:36.549 time_based=1 
00:09:36.549 runtime=10 00:09:36.549 ioengine=libaio 00:09:36.549 direct=1 00:09:36.549 bs=4096 00:09:36.549 iodepth=1 00:09:36.549 norandommap=1 00:09:36.549 numjobs=1 00:09:36.549 00:09:36.549 [job0] 00:09:36.549 filename=/dev/nvme0n1 00:09:36.549 [job1] 00:09:36.549 filename=/dev/nvme0n2 00:09:36.549 [job2] 00:09:36.549 filename=/dev/nvme0n3 00:09:36.549 [job3] 00:09:36.549 filename=/dev/nvme0n4 00:09:36.549 Could not set queue depth (nvme0n1) 00:09:36.549 Could not set queue depth (nvme0n2) 00:09:36.549 Could not set queue depth (nvme0n3) 00:09:36.549 Could not set queue depth (nvme0n4) 00:09:36.549 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:36.549 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:36.549 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:36.549 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:36.549 fio-3.35 00:09:36.549 Starting 4 threads 00:09:39.832 03:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:39.832 fio: pid=66758, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:39.832 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=40513536, buflen=4096 00:09:39.832 03:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:40.091 fio: pid=66757, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:40.091 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=33640448, buflen=4096 00:09:40.091 03:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:40.091 03:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:40.349 fio: pid=66755, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:40.349 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=37441536, buflen=4096 00:09:40.349 03:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:40.349 03:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:40.608 fio: pid=66756, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:40.608 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=56205312, buflen=4096 00:09:40.608 00:09:40.608 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66755: Wed Oct 9 03:12:23 2024 00:09:40.608 read: IOPS=2592, BW=10.1MiB/s (10.6MB/s)(35.7MiB/3527msec) 00:09:40.608 slat (usec): min=7, max=12646, avg=23.32, stdev=207.63 00:09:40.608 clat (usec): min=3, max=4541, avg=360.78, stdev=94.43 00:09:40.608 lat (usec): min=155, max=12922, avg=384.11, stdev=227.56 00:09:40.608 clat percentiles (usec): 00:09:40.608 | 1.00th=[ 204], 5.00th=[ 235], 10.00th=[ 273], 20.00th=[ 318], 00:09:40.608 | 30.00th=[ 338], 40.00th=[ 351], 50.00th=[ 363], 60.00th=[ 375], 
00:09:40.608 | 70.00th=[ 388], 80.00th=[ 404], 90.00th=[ 424], 95.00th=[ 449], 00:09:40.608 | 99.00th=[ 498], 99.50th=[ 603], 99.90th=[ 1123], 99.95th=[ 2057], 00:09:40.608 | 99.99th=[ 4555] 00:09:40.608 bw ( KiB/s): min= 9689, max=10336, per=23.54%, avg=10071.50, stdev=245.99, samples=6 00:09:40.608 iops : min= 2422, max= 2584, avg=2517.83, stdev=61.57, samples=6 00:09:40.608 lat (usec) : 4=0.01%, 250=7.48%, 500=91.52%, 750=0.66%, 1000=0.15% 00:09:40.608 lat (msec) : 2=0.11%, 4=0.04%, 10=0.01% 00:09:40.608 cpu : usr=1.08%, sys=4.37%, ctx=9151, majf=0, minf=1 00:09:40.608 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:40.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.608 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.608 issued rwts: total=9142,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:40.608 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:40.608 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66756: Wed Oct 9 03:12:23 2024 00:09:40.608 read: IOPS=3583, BW=14.0MiB/s (14.7MB/s)(53.6MiB/3830msec) 00:09:40.608 slat (usec): min=7, max=15754, avg=17.79, stdev=222.94 00:09:40.608 clat (usec): min=118, max=36370, avg=260.14, stdev=340.74 00:09:40.608 lat (usec): min=130, max=36399, avg=277.92, stdev=407.10 00:09:40.608 clat percentiles (usec): 00:09:40.608 | 1.00th=[ 139], 5.00th=[ 159], 10.00th=[ 178], 20.00th=[ 198], 00:09:40.608 | 30.00th=[ 210], 40.00th=[ 219], 50.00th=[ 229], 60.00th=[ 239], 00:09:40.608 | 70.00th=[ 258], 80.00th=[ 330], 90.00th=[ 388], 95.00th=[ 416], 00:09:40.608 | 99.00th=[ 469], 99.50th=[ 494], 99.90th=[ 2089], 99.95th=[ 3785], 00:09:40.608 | 99.99th=[ 7177] 00:09:40.608 bw ( KiB/s): min= 9852, max=16656, per=32.41%, avg=13868.71, stdev=2975.59, samples=7 00:09:40.608 iops : min= 2463, max= 4164, avg=3467.14, stdev=743.89, samples=7 00:09:40.608 lat (usec) : 250=66.76%, 500=32.80%, 750=0.22%, 1000=0.07% 00:09:40.608 lat (msec) : 2=0.04%, 4=0.07%, 10=0.02%, 50=0.01% 00:09:40.608 cpu : usr=1.18%, sys=4.31%, ctx=13730, majf=0, minf=2 00:09:40.608 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:40.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.608 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.608 issued rwts: total=13723,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:40.608 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:40.608 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66757: Wed Oct 9 03:12:23 2024 00:09:40.608 read: IOPS=2515, BW=9.82MiB/s (10.3MB/s)(32.1MiB/3266msec) 00:09:40.608 slat (usec): min=7, max=11729, avg=21.99, stdev=179.55 00:09:40.608 clat (usec): min=188, max=5783, avg=373.57, stdev=94.21 00:09:40.608 lat (usec): min=211, max=12041, avg=395.56, stdev=201.64 00:09:40.608 clat percentiles (usec): 00:09:40.608 | 1.00th=[ 273], 5.00th=[ 302], 10.00th=[ 314], 20.00th=[ 334], 00:09:40.608 | 30.00th=[ 347], 40.00th=[ 359], 50.00th=[ 371], 60.00th=[ 383], 00:09:40.608 | 70.00th=[ 392], 80.00th=[ 408], 90.00th=[ 433], 95.00th=[ 449], 00:09:40.608 | 99.00th=[ 490], 99.50th=[ 506], 99.90th=[ 1303], 99.95th=[ 1778], 00:09:40.608 | 99.99th=[ 5800] 00:09:40.608 bw ( KiB/s): min= 9852, max=10344, per=23.73%, avg=10155.17, stdev=183.31, samples=6 00:09:40.608 iops : min= 2463, max= 2586, avg=2538.67, stdev=45.94, 
samples=6 00:09:40.608 lat (usec) : 250=0.40%, 500=99.01%, 750=0.37%, 1000=0.06% 00:09:40.608 lat (msec) : 2=0.10%, 4=0.04%, 10=0.01% 00:09:40.608 cpu : usr=1.26%, sys=4.20%, ctx=8220, majf=0, minf=1 00:09:40.608 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:40.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.608 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.608 issued rwts: total=8214,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:40.608 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:40.608 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66758: Wed Oct 9 03:12:23 2024 00:09:40.608 read: IOPS=3315, BW=12.9MiB/s (13.6MB/s)(38.6MiB/2984msec) 00:09:40.608 slat (usec): min=7, max=107, avg=13.90, stdev= 4.56 00:09:40.608 clat (usec): min=186, max=2179, avg=286.46, stdev=77.93 00:09:40.608 lat (usec): min=200, max=2194, avg=300.36, stdev=77.84 00:09:40.608 clat percentiles (usec): 00:09:40.608 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 229], 00:09:40.608 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 253], 60.00th=[ 265], 00:09:40.608 | 70.00th=[ 314], 80.00th=[ 367], 90.00th=[ 404], 95.00th=[ 429], 00:09:40.608 | 99.00th=[ 474], 99.50th=[ 490], 99.90th=[ 766], 99.95th=[ 881], 00:09:40.608 | 99.99th=[ 2180] 00:09:40.608 bw ( KiB/s): min= 9852, max=15680, per=32.48%, avg=13898.40, stdev=2485.53, samples=5 00:09:40.608 iops : min= 2463, max= 3920, avg=3474.60, stdev=621.38, samples=5 00:09:40.608 lat (usec) : 250=48.03%, 500=51.63%, 750=0.23%, 1000=0.06% 00:09:40.608 lat (msec) : 2=0.03%, 4=0.01% 00:09:40.608 cpu : usr=0.91%, sys=3.99%, ctx=9893, majf=0, minf=2 00:09:40.608 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:40.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.608 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.608 issued rwts: total=9892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:40.608 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:40.608 00:09:40.608 Run status group 0 (all jobs): 00:09:40.608 READ: bw=41.8MiB/s (43.8MB/s), 9.82MiB/s-14.0MiB/s (10.3MB/s-14.7MB/s), io=160MiB (168MB), run=2984-3830msec 00:09:40.608 00:09:40.608 Disk stats (read/write): 00:09:40.608 nvme0n1: ios=8520/0, merge=0/0, ticks=3052/0, in_queue=3052, util=95.25% 00:09:40.608 nvme0n2: ios=12535/0, merge=0/0, ticks=3262/0, in_queue=3262, util=95.23% 00:09:40.608 nvme0n3: ios=7845/0, merge=0/0, ticks=2813/0, in_queue=2813, util=96.21% 00:09:40.608 nvme0n4: ios=9591/0, merge=0/0, ticks=2601/0, in_queue=2601, util=96.83% 00:09:40.608 03:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:40.608 03:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:40.866 03:12:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:40.866 03:12:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:41.125 03:12:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
00:09:41.125 03:12:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:41.382 03:12:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:41.382 03:12:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:41.640 03:12:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:41.640 03:12:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:41.899 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:41.899 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66709 00:09:41.899 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:41.899 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:41.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.899 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:41.899 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:09:41.899 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:41.899 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:42.157 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:42.157 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:42.157 nvmf hotplug test: fio failed as expected 00:09:42.157 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:09:42.157 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:42.157 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:42.157 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:42.157 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:42.157 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:42.157 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:42.157 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:42.157 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:42.157 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:42.157 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:42.417 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:09:42.417 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:42.417 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:42.417 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:42.417 rmmod nvme_tcp 00:09:42.417 rmmod nvme_fabrics 00:09:42.417 rmmod nvme_keyring 00:09:42.417 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:42.417 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:42.417 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:42.417 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 66328 ']' 00:09:42.417 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 66328 00:09:42.417 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 66328 ']' 00:09:42.417 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 66328 00:09:42.417 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:09:42.417 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:42.417 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66328 00:09:42.417 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:42.417 killing process with pid 66328 00:09:42.417 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:42.417 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66328' 00:09:42.417 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 66328 00:09:42.417 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 66328 00:09:42.676 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:42.676 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:42.676 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:42.676 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:42.676 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:09:42.676 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:42.676 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:09:42.676 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:42.676 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:42.676 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:42.676 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:42.676 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:42.676 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:42.676 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:42.676 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:42.676 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:42.676 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:42.676 03:12:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:42.935 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:42.935 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:42.935 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:42.935 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:42.935 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:42.935 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.935 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.935 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.935 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:09:42.935 ************************************ 00:09:42.935 END TEST nvmf_fio_target 00:09:42.935 ************************************ 00:09:42.935 00:09:42.935 real 0m19.810s 00:09:42.935 user 1m15.792s 00:09:42.935 sys 0m8.288s 00:09:42.935 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:42.935 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.935 03:12:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:42.935 03:12:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:42.935 03:12:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:42.935 03:12:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:42.935 ************************************ 00:09:42.935 START TEST nvmf_bdevio 00:09:42.935 ************************************ 00:09:42.935 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:43.195 * Looking for test storage... 
00:09:43.195 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:43.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.195 --rc genhtml_branch_coverage=1 00:09:43.195 --rc genhtml_function_coverage=1 00:09:43.195 --rc genhtml_legend=1 00:09:43.195 --rc geninfo_all_blocks=1 00:09:43.195 --rc geninfo_unexecuted_blocks=1 00:09:43.195 00:09:43.195 ' 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:43.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.195 --rc genhtml_branch_coverage=1 00:09:43.195 --rc genhtml_function_coverage=1 00:09:43.195 --rc genhtml_legend=1 00:09:43.195 --rc geninfo_all_blocks=1 00:09:43.195 --rc geninfo_unexecuted_blocks=1 00:09:43.195 00:09:43.195 ' 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:43.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.195 --rc genhtml_branch_coverage=1 00:09:43.195 --rc genhtml_function_coverage=1 00:09:43.195 --rc genhtml_legend=1 00:09:43.195 --rc geninfo_all_blocks=1 00:09:43.195 --rc geninfo_unexecuted_blocks=1 00:09:43.195 00:09:43.195 ' 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:43.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.195 --rc genhtml_branch_coverage=1 00:09:43.195 --rc genhtml_function_coverage=1 00:09:43.195 --rc genhtml_legend=1 00:09:43.195 --rc geninfo_all_blocks=1 00:09:43.195 --rc geninfo_unexecuted_blocks=1 00:09:43.195 00:09:43.195 ' 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.195 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:43.196 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@458 -- # nvmf_veth_init 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:43.196 Cannot find device "nvmf_init_br" 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:43.196 Cannot find device "nvmf_init_br2" 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:43.196 Cannot find device "nvmf_tgt_br" 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:43.196 Cannot find device "nvmf_tgt_br2" 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:43.196 Cannot find device "nvmf_init_br" 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:43.196 Cannot find device "nvmf_init_br2" 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:43.196 Cannot find device "nvmf_tgt_br" 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:43.196 Cannot find device "nvmf_tgt_br2" 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:43.196 Cannot find device "nvmf_br" 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:09:43.196 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:43.455 Cannot find device "nvmf_init_if" 00:09:43.455 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:09:43.455 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:43.455 Cannot find device "nvmf_init_if2" 00:09:43.455 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:09:43.455 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:43.455 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:43.455 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:09:43.455 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:43.455 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:43.455 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:09:43.455 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:43.455 
03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:43.455 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:43.455 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:43.455 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:43.455 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:43.455 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:43.455 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:43.455 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:43.455 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:43.455 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:43.455 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:43.455 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:43.455 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:43.455 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:43.455 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:43.455 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:43.455 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:43.455 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:43.455 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:43.455 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:43.455 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:43.455 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:43.455 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:43.455 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:43.455 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:43.715 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:43.715 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:43.715 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:43.715 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:43.715 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:43.715 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:43.715 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:43.715 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:43.715 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 00:09:43.715 00:09:43.715 --- 10.0.0.3 ping statistics --- 00:09:43.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.715 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:09:43.715 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:43.715 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:43.715 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:09:43.715 00:09:43.715 --- 10.0.0.4 ping statistics --- 00:09:43.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.715 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:09:43.715 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:43.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:43.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:09:43.715 00:09:43.715 --- 10.0.0.1 ping statistics --- 00:09:43.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.715 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:09:43.715 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:43.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:43.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:09:43.715 00:09:43.715 --- 10.0.0.2 ping statistics --- 00:09:43.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.715 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:09:43.715 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:43.715 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # return 0 00:09:43.715 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:43.715 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:43.715 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:43.715 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:43.715 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:43.715 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:43.715 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:43.715 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:43.715 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:43.715 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:43.715 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.715 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=67077 00:09:43.715 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:43.715 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 67077 00:09:43.715 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 67077 ']' 00:09:43.715 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.715 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:43.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.715 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.715 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:43.715 03:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.715 [2024-10-09 03:12:26.882236] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:09:43.715 [2024-10-09 03:12:26.882820] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.973 [2024-10-09 03:12:27.026331] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:43.973 [2024-10-09 03:12:27.130868] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:43.973 [2024-10-09 03:12:27.130941] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:43.973 [2024-10-09 03:12:27.130956] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:43.973 [2024-10-09 03:12:27.130966] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:43.973 [2024-10-09 03:12:27.130975] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:43.973 [2024-10-09 03:12:27.132545] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:09:43.973 [2024-10-09 03:12:27.134110] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:09:43.973 [2024-10-09 03:12:27.134160] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:09:43.974 [2024-10-09 03:12:27.135017] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:09:43.974 [2024-10-09 03:12:27.196535] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:44.931 03:12:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:44.931 03:12:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:09:44.931 03:12:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:44.931 03:12:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:44.931 03:12:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:44.931 03:12:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:44.931 03:12:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:44.931 03:12:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.931 03:12:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:44.931 [2024-10-09 03:12:27.987796] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:44.931 03:12:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.931 03:12:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:44.931 03:12:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.931 03:12:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:44.931 Malloc0 00:09:44.931 03:12:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.931 03:12:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:09:44.931 03:12:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.931 03:12:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:44.931 03:12:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.931 03:12:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:44.931 03:12:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.931 03:12:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:44.931 03:12:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.931 03:12:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:44.931 03:12:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.931 03:12:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:44.931 [2024-10-09 03:12:28.044831] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:44.931 03:12:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.931 03:12:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:44.931 03:12:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:44.931 03:12:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:09:44.931 03:12:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:09:44.931 03:12:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:44.931 03:12:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:44.931 { 00:09:44.931 "params": { 00:09:44.931 "name": "Nvme$subsystem", 00:09:44.931 "trtype": "$TEST_TRANSPORT", 00:09:44.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:44.931 "adrfam": "ipv4", 00:09:44.931 "trsvcid": "$NVMF_PORT", 00:09:44.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:44.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:44.931 "hdgst": ${hdgst:-false}, 00:09:44.931 "ddgst": ${ddgst:-false} 00:09:44.931 }, 00:09:44.931 "method": "bdev_nvme_attach_controller" 00:09:44.931 } 00:09:44.931 EOF 00:09:44.931 )") 00:09:44.931 03:12:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:09:44.931 03:12:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 
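[Editor's note] At this point the target side is fully configured over JSON-RPC: a TCP transport, a 64 MiB Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 carrying that namespace, and a listener on 10.0.0.3:4420. The trace drives this through the rpc_cmd wrapper; an equivalent sequence with scripts/rpc.py (flags mirrored from the logged calls, so treat this as a sketch rather than the script itself) would be roughly:

    # Target app is launched inside the namespace first, as logged above:
    #   ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The gen_nvmf_target_json heredoc above renders the matching initiator-side configuration (an nqn.2016-06.io.spdk:cnode1 attach over TCP to 10.0.0.3:4420), which the bdevio binary reads as JSON from /dev/fd/62 on the lines that follow.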
00:09:44.931 03:12:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:09:44.931 03:12:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:44.931 "params": { 00:09:44.931 "name": "Nvme1", 00:09:44.931 "trtype": "tcp", 00:09:44.931 "traddr": "10.0.0.3", 00:09:44.931 "adrfam": "ipv4", 00:09:44.931 "trsvcid": "4420", 00:09:44.931 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:44.931 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:44.931 "hdgst": false, 00:09:44.931 "ddgst": false 00:09:44.931 }, 00:09:44.931 "method": "bdev_nvme_attach_controller" 00:09:44.931 }' 00:09:44.931 [2024-10-09 03:12:28.110207] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:09:44.931 [2024-10-09 03:12:28.110335] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67113 ] 00:09:45.189 [2024-10-09 03:12:28.252975] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:45.189 [2024-10-09 03:12:28.405560] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.189 [2024-10-09 03:12:28.405441] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.189 [2024-10-09 03:12:28.405552] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:45.189 [2024-10-09 03:12:28.491353] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:45.448 I/O targets: 00:09:45.448 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:45.448 00:09:45.448 00:09:45.448 CUnit - A unit testing framework for C - Version 2.1-3 00:09:45.448 http://cunit.sourceforge.net/ 00:09:45.448 00:09:45.448 00:09:45.448 Suite: bdevio tests on: Nvme1n1 00:09:45.448 Test: blockdev write read block ...passed 00:09:45.448 Test: blockdev write zeroes read block ...passed 00:09:45.448 Test: blockdev write zeroes read no split ...passed 00:09:45.448 Test: blockdev write zeroes read split ...passed 00:09:45.448 Test: blockdev write zeroes read split partial ...passed 00:09:45.448 Test: blockdev reset ...[2024-10-09 03:12:28.656500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:09:45.448 [2024-10-09 03:12:28.656608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa82040 (9): Bad file descriptor 00:09:45.448 [2024-10-09 03:12:28.671310] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:45.448 passed 00:09:45.448 Test: blockdev write read 8 blocks ...passed 00:09:45.448 Test: blockdev write read size > 128k ...passed 00:09:45.448 Test: blockdev write read invalid size ...passed 00:09:45.448 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:45.448 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:45.448 Test: blockdev write read max offset ...passed 00:09:45.448 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:45.448 Test: blockdev writev readv 8 blocks ...passed 00:09:45.448 Test: blockdev writev readv 30 x 1block ...passed 00:09:45.448 Test: blockdev writev readv block ...passed 00:09:45.448 Test: blockdev writev readv size > 128k ...passed 00:09:45.448 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:45.448 Test: blockdev comparev and writev ...[2024-10-09 03:12:28.679895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:45.448 [2024-10-09 03:12:28.680129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:45.448 [2024-10-09 03:12:28.680303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:45.448 [2024-10-09 03:12:28.680411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:45.448 [2024-10-09 03:12:28.680870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:45.448 [2024-10-09 03:12:28.681037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:45.448 [2024-10-09 03:12:28.681191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:45.448 [2024-10-09 03:12:28.681286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:45.448 [2024-10-09 03:12:28.681787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:45.448 [2024-10-09 03:12:28.681907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:45.449 [2024-10-09 03:12:28.682017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:45.449 [2024-10-09 03:12:28.682187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:45.449 [2024-10-09 03:12:28.682663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:45.449 [2024-10-09 03:12:28.682776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:45.449 [2024-10-09 03:12:28.682888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:45.449 [2024-10-09 03:12:28.682975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:45.449 passed 00:09:45.449 Test: blockdev nvme passthru rw ...passed 00:09:45.449 Test: blockdev nvme passthru vendor specific ...[2024-10-09 03:12:28.684106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:45.449 [2024-10-09 03:12:28.684260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:45.449 [2024-10-09 03:12:28.684534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:45.449 [2024-10-09 03:12:28.684652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:45.449 [2024-10-09 03:12:28.684895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:45.449 [2024-10-09 03:12:28.685011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:45.449 [2024-10-09 03:12:28.685288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:45.449 [2024-10-09 03:12:28.685404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:45.449 passed 00:09:45.449 Test: blockdev nvme admin passthru ...passed 00:09:45.449 Test: blockdev copy ...passed 00:09:45.449 00:09:45.449 Run Summary: Type Total Ran Passed Failed Inactive 00:09:45.449 suites 1 1 n/a 0 0 00:09:45.449 tests 23 23 23 0 0 00:09:45.449 asserts 152 152 152 0 n/a 00:09:45.449 00:09:45.449 Elapsed time = 0.149 seconds 00:09:45.708 03:12:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:45.708 03:12:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.708 03:12:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:45.708 03:12:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.708 03:12:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:45.708 03:12:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:45.708 03:12:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:45.708 03:12:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:45.967 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:45.967 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:45.967 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:45.967 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:45.967 rmmod nvme_tcp 00:09:45.967 rmmod nvme_fabrics 00:09:45.967 rmmod nvme_keyring 00:09:45.967 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:45.967 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:45.967 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
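[Editor's note] With all 23 bdevio tests passing (152 asserts, ~0.15 s), teardown begins: the subsystem is deleted, the initiator-side nvme-tcp / nvme-fabrics / nvme-keyring modules are unloaded, and the following lines kill the target process and dismantle the network plumbing. The cleanup order, sketched from the trace (the real logic is nvmftestfini and nvmf_veth_fini in nvmf/common.sh, so details here are approximate):

    # 1. Unload the kernel initiator modules.
    modprobe -v -r nvme-tcp          # also pulls out nvme_fabrics / nvme_keyring

    # 2. Stop the target application (pid 67077 in this run).
    kill "$nvmfpid" && wait "$nvmfpid" || true

    # 3. Drop the SPDK-tagged firewall rules added earlier.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # 4. Detach and delete the bridge, veths, and namespace
    #    (the script also downs each port before deleting).
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # _remove_spdk_ns; assumed equivalent

Tagging the iptables rules with an SPDK_NVMF comment is what makes step 3 possible: the restore simply filters those rules out instead of trying to remember exact rule numbers.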
00:09:45.967 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 67077 ']' 00:09:45.967 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 67077 00:09:45.967 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 67077 ']' 00:09:45.967 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 67077 00:09:45.967 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:09:45.967 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:45.967 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67077 00:09:45.967 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:09:45.967 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:09:45.967 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67077' 00:09:45.967 killing process with pid 67077 00:09:45.967 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 67077 00:09:45.967 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 67077 00:09:46.225 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:46.225 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:46.225 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:46.225 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:46.225 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:09:46.225 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:46.225 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:09:46.225 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:46.225 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:46.225 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:46.225 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:46.225 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:46.225 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:46.225 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:46.225 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:46.225 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:46.225 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:46.225 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:46.225 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:09:46.225 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:46.225 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:46.484 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:46.484 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:46.484 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.484 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.484 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.484 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:09:46.484 00:09:46.484 real 0m3.421s 00:09:46.484 user 0m10.560s 00:09:46.484 sys 0m0.997s 00:09:46.484 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:46.484 ************************************ 00:09:46.484 END TEST nvmf_bdevio 00:09:46.484 03:12:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:46.484 ************************************ 00:09:46.484 03:12:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:46.484 00:09:46.484 real 2m40.687s 00:09:46.484 user 7m3.189s 00:09:46.484 sys 0m52.853s 00:09:46.484 03:12:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:46.484 03:12:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:46.484 ************************************ 00:09:46.484 END TEST nvmf_target_core 00:09:46.484 ************************************ 00:09:46.484 03:12:29 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:46.484 03:12:29 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:46.484 03:12:29 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:46.484 03:12:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:46.484 ************************************ 00:09:46.484 START TEST nvmf_target_extra 00:09:46.484 ************************************ 00:09:46.484 03:12:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:46.484 * Looking for test storage... 
00:09:46.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:46.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.744 --rc genhtml_branch_coverage=1 00:09:46.744 --rc genhtml_function_coverage=1 00:09:46.744 --rc genhtml_legend=1 00:09:46.744 --rc geninfo_all_blocks=1 00:09:46.744 --rc geninfo_unexecuted_blocks=1 00:09:46.744 00:09:46.744 ' 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:46.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.744 --rc genhtml_branch_coverage=1 00:09:46.744 --rc genhtml_function_coverage=1 00:09:46.744 --rc genhtml_legend=1 00:09:46.744 --rc geninfo_all_blocks=1 00:09:46.744 --rc geninfo_unexecuted_blocks=1 00:09:46.744 00:09:46.744 ' 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:46.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.744 --rc genhtml_branch_coverage=1 00:09:46.744 --rc genhtml_function_coverage=1 00:09:46.744 --rc genhtml_legend=1 00:09:46.744 --rc geninfo_all_blocks=1 00:09:46.744 --rc geninfo_unexecuted_blocks=1 00:09:46.744 00:09:46.744 ' 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:46.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.744 --rc genhtml_branch_coverage=1 00:09:46.744 --rc genhtml_function_coverage=1 00:09:46.744 --rc genhtml_legend=1 00:09:46.744 --rc geninfo_all_blocks=1 00:09:46.744 --rc geninfo_unexecuted_blocks=1 00:09:46.744 00:09:46.744 ' 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.744 03:12:29 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.744 03:12:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.745 03:12:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.745 03:12:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:46.745 03:12:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.745 03:12:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:46.745 03:12:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:46.745 03:12:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:46.745 03:12:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:46.745 03:12:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.745 03:12:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.745 03:12:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:46.745 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:46.745 03:12:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:46.745 03:12:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:46.745 03:12:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:46.745 03:12:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:46.745 03:12:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:46.745 03:12:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:09:46.745 03:12:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:46.745 03:12:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:46.745 03:12:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:46.745 03:12:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:46.745 ************************************ 00:09:46.745 START TEST nvmf_auth_target 00:09:46.745 ************************************ 00:09:46.745 03:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:46.745 * Looking for test storage... 
00:09:46.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:46.745 03:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:46.745 03:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:09:46.745 03:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:47.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.005 --rc genhtml_branch_coverage=1 00:09:47.005 --rc genhtml_function_coverage=1 00:09:47.005 --rc genhtml_legend=1 00:09:47.005 --rc geninfo_all_blocks=1 00:09:47.005 --rc geninfo_unexecuted_blocks=1 00:09:47.005 00:09:47.005 ' 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:47.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.005 --rc genhtml_branch_coverage=1 00:09:47.005 --rc genhtml_function_coverage=1 00:09:47.005 --rc genhtml_legend=1 00:09:47.005 --rc geninfo_all_blocks=1 00:09:47.005 --rc geninfo_unexecuted_blocks=1 00:09:47.005 00:09:47.005 ' 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:47.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.005 --rc genhtml_branch_coverage=1 00:09:47.005 --rc genhtml_function_coverage=1 00:09:47.005 --rc genhtml_legend=1 00:09:47.005 --rc geninfo_all_blocks=1 00:09:47.005 --rc geninfo_unexecuted_blocks=1 00:09:47.005 00:09:47.005 ' 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:47.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.005 --rc genhtml_branch_coverage=1 00:09:47.005 --rc genhtml_function_coverage=1 00:09:47.005 --rc genhtml_legend=1 00:09:47.005 --rc geninfo_all_blocks=1 00:09:47.005 --rc geninfo_unexecuted_blocks=1 00:09:47.005 00:09:47.005 ' 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.005 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:47.006 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@458 -- # nvmf_veth_init 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:47.006 
03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:47.006 Cannot find device "nvmf_init_br" 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:47.006 Cannot find device "nvmf_init_br2" 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:47.006 Cannot find device "nvmf_tgt_br" 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:47.006 Cannot find device "nvmf_tgt_br2" 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:47.006 Cannot find device "nvmf_init_br" 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:47.006 Cannot find device "nvmf_init_br2" 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:47.006 Cannot find device "nvmf_tgt_br" 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:47.006 Cannot find device "nvmf_tgt_br2" 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:47.006 Cannot find device "nvmf_br" 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:47.006 Cannot find device "nvmf_init_if" 00:09:47.006 03:12:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:09:47.006 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:47.007 Cannot find device "nvmf_init_if2" 00:09:47.007 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:09:47.007 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:47.007 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:47.007 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:09:47.007 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:47.007 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:47.007 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:09:47.007 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:47.007 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:47.007 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:47.007 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:47.266 03:12:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:47.266 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:47.266 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.137 ms 00:09:47.266 00:09:47.266 --- 10.0.0.3 ping statistics --- 00:09:47.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.266 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:47.266 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:47.266 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.104 ms 00:09:47.266 00:09:47.266 --- 10.0.0.4 ping statistics --- 00:09:47.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.266 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:47.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:47.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:09:47.266 00:09:47.266 --- 10.0.0.1 ping statistics --- 00:09:47.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.266 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:47.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:47.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:09:47.266 00:09:47.266 --- 10.0.0.2 ping statistics --- 00:09:47.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.266 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # return 0 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:47.266 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:47.525 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:09:47.525 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:47.525 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:47.525 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.525 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=67400 00:09:47.525 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:09:47.525 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 67400 00:09:47.525 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 67400 ']' 00:09:47.525 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.525 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:47.525 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
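The veth topology that nvmf_veth_init assembles in the lines above can be reproduced on its own with roughly the following commands. The namespace name, interface names, the 10.0.0.x/24 addresses and the port-4420 iptables rules are taken directly from the log; the script is a simplified sketch rather than the exact test/nvmf/common.sh implementation (the SPDK_NVMF comment tag on the iptables rules is omitted here).

# Sketch of the test network built above: two initiator-side veth ends on the host,
# two target-side veth ends inside the nvmf_tgt_ns_spdk namespace, all joined by one bridge.
set -e
ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends carry addresses, the *_br ends get enslaved to the bridge
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# target-side interfaces move into the namespace
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# addresses: initiators 10.0.0.1/.2 on the host, targets 10.0.0.3/.4 in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# one bridge ties the four *_br ends together
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# allow NVMe/TCP (port 4420) in on the initiator interfaces and bridge-local forwarding
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# sanity checks mirroring the log's pings
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With this in place, the target addresses 10.0.0.3/10.0.0.4 inside the namespace and the initiator addresses 10.0.0.1/10.0.0.2 on the host reach each other through nvmf_br, which is exactly what the four pings above verify before the nvmf target is started.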
00:09:47.525 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:47.525 03:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.463 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:48.463 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:09:48.463 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:48.463 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:48.463 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67432 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=115652390a66919c010a5c07414a7f0b993b22e8c2269464 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.DwD 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 115652390a66919c010a5c07414a7f0b993b22e8c2269464 0 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 115652390a66919c010a5c07414a7f0b993b22e8c2269464 0 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=115652390a66919c010a5c07414a7f0b993b22e8c2269464 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:09:48.722 03:12:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.DwD 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.DwD 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.DwD 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=b561e3f4c4ffaa91a0a495ff4c2972a7428873eb9cabee63c5309b7da1509acf 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.nL6 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key b561e3f4c4ffaa91a0a495ff4c2972a7428873eb9cabee63c5309b7da1509acf 3 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 b561e3f4c4ffaa91a0a495ff4c2972a7428873eb9cabee63c5309b7da1509acf 3 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:09:48.722 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=b561e3f4c4ffaa91a0a495ff4c2972a7428873eb9cabee63c5309b7da1509acf 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.nL6 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.nL6 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.nL6 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:09:48.723 03:12:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=ebc07b51f8ce83ce3beda20b6b9dfcfa 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.0o9 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key ebc07b51f8ce83ce3beda20b6b9dfcfa 1 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 ebc07b51f8ce83ce3beda20b6b9dfcfa 1 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=ebc07b51f8ce83ce3beda20b6b9dfcfa 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.0o9 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.0o9 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.0o9 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=b1bb36e1f6d4df7c932506a436a4f1e89897372a4e4da562 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.Tvt 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key b1bb36e1f6d4df7c932506a436a4f1e89897372a4e4da562 2 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 b1bb36e1f6d4df7c932506a436a4f1e89897372a4e4da562 2 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@730 -- # prefix=DHHC-1 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=b1bb36e1f6d4df7c932506a436a4f1e89897372a4e4da562 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:09:48.723 03:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.Tvt 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.Tvt 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Tvt 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=e240f349121db04c0f77f4e98c8ecd51dd8c8add08aee7ef 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.NtK 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key e240f349121db04c0f77f4e98c8ecd51dd8c8add08aee7ef 2 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 e240f349121db04c0f77f4e98c8ecd51dd8c8add08aee7ef 2 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=e240f349121db04c0f77f4e98c8ecd51dd8c8add08aee7ef 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.NtK 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.NtK 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.NtK 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:09:48.983 03:12:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=19e7df87002ca7a7fd5478e9f7059eb7 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.VHD 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 19e7df87002ca7a7fd5478e9f7059eb7 1 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 19e7df87002ca7a7fd5478e9f7059eb7 1 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=19e7df87002ca7a7fd5478e9f7059eb7 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.VHD 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.VHD 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.VHD 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=4bb18c528260f9df99c0bd957d568236813b6fbdc100422fc179fe602301f711 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.6IO 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 
4bb18c528260f9df99c0bd957d568236813b6fbdc100422fc179fe602301f711 3 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 4bb18c528260f9df99c0bd957d568236813b6fbdc100422fc179fe602301f711 3 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=4bb18c528260f9df99c0bd957d568236813b6fbdc100422fc179fe602301f711 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.6IO 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.6IO 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.6IO 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67400 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 67400 ']' 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:48.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:48.983 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.551 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:49.551 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:09:49.551 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67432 /var/tmp/host.sock 00:09:49.551 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 67432 ']' 00:09:49.551 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:09:49.551 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:49.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:49.551 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
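The four DHHC-1 secrets created above (keys[0..3] plus their controller counterparts in ckeys) all come out of the same two helpers: gen_dhchap_key pulls N/2 random bytes through xxd -p to get an N-character ASCII hex string, and format_dhchap_key/format_key wraps that string as DHHC-1:<digest>:<base64>: via the inline python step. A minimal re-creation is sketched below; it is not the exact nvmf/common.sh code, and the assumption that the extra bytes folded into the base64 payload are a little-endian CRC-32 of the key (per the NVMe DH-HMAC-CHAP secret representation) is inferred rather than shown verbatim in the log.

# Hedged sketch of gen_dhchap_key / format_dhchap_key as observed above.
# Digest indices follow the log: 0=null, 1=sha256, 2=sha384, 3=sha512.
gen_hex_key() {                       # $1 = desired ASCII length (48 or 64 in the log)
    xxd -p -c0 -l "$(( $1 / 2 ))" /dev/urandom
}

format_dhchap_secret() {              # $1 = ASCII hex key, $2 = digest index
    python3 - "$1" "$2" <<'PYEOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
# Assumption: the trailing 4 bytes visible in the log's secrets are a little-endian CRC-32.
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PYEOF
}

key0=$(gen_hex_key 48)                           # like keys[0] above (null digest, 48 chars)
format_dhchap_secret "$key0" 0                   # -> DHHC-1:00:<base64(key0 + crc)>:
format_dhchap_secret "$(gen_hex_key 64)" 3       # like ckeys[0] above (sha512, 64 chars)

Decoding the base64 payload of the keys[0] secret used later in the log (DHHC-1:00:MTE1...) gives back the 48-character hex string generated above plus four extra bytes, which is what motivates the CRC-32 assumption in the sketch.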
00:09:49.551 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:49.551 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.810 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:49.810 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:09:49.810 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:09:49.810 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.810 03:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.810 03:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.810 03:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:49.810 03:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.DwD 00:09:49.810 03:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.810 03:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.810 03:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.810 03:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.DwD 00:09:49.810 03:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.DwD 00:09:50.069 03:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.nL6 ]] 00:09:50.069 03:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nL6 00:09:50.069 03:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.069 03:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.069 03:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.069 03:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nL6 00:09:50.070 03:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nL6 00:09:50.328 03:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:50.328 03:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.0o9 00:09:50.328 03:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.328 03:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.328 03:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.328 03:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.0o9 00:09:50.328 03:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.0o9 00:09:50.587 03:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Tvt ]] 00:09:50.587 03:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Tvt 00:09:50.587 03:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.587 03:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.587 03:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.587 03:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Tvt 00:09:50.587 03:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Tvt 00:09:50.846 03:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:50.846 03:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.NtK 00:09:50.846 03:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.846 03:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.105 03:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.105 03:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.NtK 00:09:51.105 03:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.NtK 00:09:51.105 03:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.VHD ]] 00:09:51.106 03:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.VHD 00:09:51.106 03:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.106 03:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.364 03:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.364 03:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.VHD 00:09:51.364 03:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.VHD 00:09:51.364 03:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:51.364 03:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.6IO 00:09:51.364 03:12:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.364 03:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.364 03:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.364 03:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.6IO 00:09:51.364 03:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.6IO 00:09:51.623 03:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:09:51.623 03:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:09:51.623 03:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:51.623 03:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:51.623 03:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:51.623 03:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:51.882 03:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:09:51.882 03:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:51.882 03:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:51.882 03:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:51.882 03:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:51.882 03:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:51.882 03:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:51.882 03:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.882 03:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.882 03:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.882 03:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:51.882 03:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:51.882 03:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:52.450 00:09:52.450 03:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:52.450 03:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:52.450 03:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:52.450 03:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:52.450 03:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:52.450 03:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.450 03:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.451 03:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.451 03:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:52.451 { 00:09:52.451 "cntlid": 1, 00:09:52.451 "qid": 0, 00:09:52.451 "state": "enabled", 00:09:52.451 "thread": "nvmf_tgt_poll_group_000", 00:09:52.451 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:09:52.451 "listen_address": { 00:09:52.451 "trtype": "TCP", 00:09:52.451 "adrfam": "IPv4", 00:09:52.451 "traddr": "10.0.0.3", 00:09:52.451 "trsvcid": "4420" 00:09:52.451 }, 00:09:52.451 "peer_address": { 00:09:52.451 "trtype": "TCP", 00:09:52.451 "adrfam": "IPv4", 00:09:52.451 "traddr": "10.0.0.1", 00:09:52.451 "trsvcid": "42790" 00:09:52.451 }, 00:09:52.451 "auth": { 00:09:52.451 "state": "completed", 00:09:52.451 "digest": "sha256", 00:09:52.451 "dhgroup": "null" 00:09:52.451 } 00:09:52.451 } 00:09:52.451 ]' 00:09:52.451 03:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:52.709 03:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:52.709 03:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:52.709 03:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:52.709 03:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:52.709 03:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:52.709 03:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:52.709 03:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:52.968 03:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:09:52.968 03:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:09:57.158 03:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:57.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:57.158 03:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:09:57.158 03:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.158 03:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.158 03:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.158 03:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:57.158 03:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:57.158 03:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:57.417 03:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:09:57.417 03:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:57.417 03:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:57.417 03:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:57.417 03:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:57.417 03:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:57.417 03:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:57.417 03:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.417 03:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.417 03:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.417 03:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:57.417 03:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:57.417 03:12:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:57.676 00:09:57.676 03:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:57.676 03:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:57.676 03:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:57.935 03:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:57.935 03:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:57.935 03:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.935 03:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.935 03:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.935 03:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:57.935 { 00:09:57.935 "cntlid": 3, 00:09:57.935 "qid": 0, 00:09:57.935 "state": "enabled", 00:09:57.935 "thread": "nvmf_tgt_poll_group_000", 00:09:57.935 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:09:57.935 "listen_address": { 00:09:57.935 "trtype": "TCP", 00:09:57.935 "adrfam": "IPv4", 00:09:57.935 "traddr": "10.0.0.3", 00:09:57.935 "trsvcid": "4420" 00:09:57.935 }, 00:09:57.935 "peer_address": { 00:09:57.935 "trtype": "TCP", 00:09:57.935 "adrfam": "IPv4", 00:09:57.935 "traddr": "10.0.0.1", 00:09:57.935 "trsvcid": "42822" 00:09:57.935 }, 00:09:57.935 "auth": { 00:09:57.935 "state": "completed", 00:09:57.935 "digest": "sha256", 00:09:57.935 "dhgroup": "null" 00:09:57.935 } 00:09:57.935 } 00:09:57.935 ]' 00:09:57.935 03:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:57.935 03:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:57.935 03:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:58.194 03:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:58.194 03:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:58.194 03:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:58.194 03:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:58.194 03:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:58.453 03:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret 
DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:09:58.453 03:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:09:59.020 03:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:59.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:59.020 03:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:09:59.020 03:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.020 03:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.020 03:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.020 03:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:59.020 03:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:59.020 03:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:59.279 03:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:09:59.279 03:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:59.279 03:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:59.279 03:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:59.279 03:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:59.279 03:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:59.279 03:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:59.279 03:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.279 03:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.279 03:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.279 03:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:59.279 03:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:59.279 03:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:59.848 00:09:59.848 03:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:59.848 03:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:59.848 03:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:59.848 03:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:00.107 03:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:00.107 03:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.107 03:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.107 03:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.107 03:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:00.107 { 00:10:00.107 "cntlid": 5, 00:10:00.108 "qid": 0, 00:10:00.108 "state": "enabled", 00:10:00.108 "thread": "nvmf_tgt_poll_group_000", 00:10:00.108 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:10:00.108 "listen_address": { 00:10:00.108 "trtype": "TCP", 00:10:00.108 "adrfam": "IPv4", 00:10:00.108 "traddr": "10.0.0.3", 00:10:00.108 "trsvcid": "4420" 00:10:00.108 }, 00:10:00.108 "peer_address": { 00:10:00.108 "trtype": "TCP", 00:10:00.108 "adrfam": "IPv4", 00:10:00.108 "traddr": "10.0.0.1", 00:10:00.108 "trsvcid": "44164" 00:10:00.108 }, 00:10:00.108 "auth": { 00:10:00.108 "state": "completed", 00:10:00.108 "digest": "sha256", 00:10:00.108 "dhgroup": "null" 00:10:00.108 } 00:10:00.108 } 00:10:00.108 ]' 00:10:00.108 03:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:00.108 03:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:00.108 03:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:00.108 03:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:00.108 03:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:00.108 03:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:00.108 03:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:00.108 03:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:00.367 03:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:10:00.367 03:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:10:00.935 03:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:00.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:00.935 03:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:10:00.935 03:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.935 03:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.935 03:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.935 03:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:00.935 03:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:00.935 03:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:01.194 03:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:10:01.194 03:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:01.194 03:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:01.194 03:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:01.194 03:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:01.194 03:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:01.194 03:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key3 00:10:01.194 03:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.194 03:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.194 03:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.194 03:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:01.194 03:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:01.194 03:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:01.763 00:10:01.763 03:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:01.763 03:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:01.763 03:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:01.763 03:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:01.763 03:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:01.763 03:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.763 03:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.763 03:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.763 03:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:01.763 { 00:10:01.763 "cntlid": 7, 00:10:01.763 "qid": 0, 00:10:01.763 "state": "enabled", 00:10:01.763 "thread": "nvmf_tgt_poll_group_000", 00:10:01.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:10:01.763 "listen_address": { 00:10:01.763 "trtype": "TCP", 00:10:01.763 "adrfam": "IPv4", 00:10:01.763 "traddr": "10.0.0.3", 00:10:01.763 "trsvcid": "4420" 00:10:01.763 }, 00:10:01.763 "peer_address": { 00:10:01.763 "trtype": "TCP", 00:10:01.763 "adrfam": "IPv4", 00:10:01.763 "traddr": "10.0.0.1", 00:10:01.763 "trsvcid": "44192" 00:10:01.763 }, 00:10:01.763 "auth": { 00:10:01.763 "state": "completed", 00:10:01.763 "digest": "sha256", 00:10:01.763 "dhgroup": "null" 00:10:01.763 } 00:10:01.763 } 00:10:01.763 ]' 00:10:01.763 03:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:02.022 03:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:02.022 03:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:02.022 03:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:02.022 03:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:02.022 03:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:02.022 03:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:02.022 03:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:02.284 03:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:10:02.284 03:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:10:02.870 03:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:02.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:02.870 03:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:10:02.870 03:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.870 03:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.870 03:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.870 03:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:02.870 03:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:02.870 03:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:02.870 03:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:03.129 03:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:10:03.129 03:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:03.129 03:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:03.129 03:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:03.129 03:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:03.129 03:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:03.129 03:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:03.129 03:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.129 03:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.129 03:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.129 03:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:03.129 03:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:03.129 03:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:03.697 00:10:03.697 03:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:03.697 03:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:03.697 03:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:03.957 03:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:03.957 03:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:03.957 03:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.957 03:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.957 03:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.957 03:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:03.957 { 00:10:03.957 "cntlid": 9, 00:10:03.957 "qid": 0, 00:10:03.957 "state": "enabled", 00:10:03.957 "thread": "nvmf_tgt_poll_group_000", 00:10:03.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:10:03.957 "listen_address": { 00:10:03.957 "trtype": "TCP", 00:10:03.957 "adrfam": "IPv4", 00:10:03.957 "traddr": "10.0.0.3", 00:10:03.957 "trsvcid": "4420" 00:10:03.957 }, 00:10:03.957 "peer_address": { 00:10:03.957 "trtype": "TCP", 00:10:03.957 "adrfam": "IPv4", 00:10:03.957 "traddr": "10.0.0.1", 00:10:03.957 "trsvcid": "44208" 00:10:03.957 }, 00:10:03.957 "auth": { 00:10:03.957 "state": "completed", 00:10:03.957 "digest": "sha256", 00:10:03.957 "dhgroup": "ffdhe2048" 00:10:03.957 } 00:10:03.957 } 00:10:03.957 ]' 00:10:03.957 03:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:03.957 03:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:03.957 03:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:03.957 03:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:03.957 03:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:04.216 03:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:04.216 03:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:04.216 03:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:04.475 
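The iteration that ends here (sha256 digest, ffdhe2048 DH group, key0) has the same shape as every other pass in this run: the host-side SPDK instance is restricted to a single digest/DH-group pair, the host NQN is registered on the subsystem with a key pair, a controller is attached with the matching keys, the authenticated qpair is checked on the target, and everything is torn down again. Condensed into plain rpc.py calls (and leaving out the nvme-cli connect, which is covered further down), one pass looks roughly like the sketch below. The test actually wraps these in its rpc_cmd/hostrpc helpers; the target-side socket (shown here as the default) and the earlier keyring setup that defines key0/ckey0 are not visible in this excerpt, so treat those details as assumptions.

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Host-side SPDK app: only offer one digest and one DH group during negotiation.
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # Target side: allow this host on the subsystem with a host key and a controller key.
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Host side: attach a controller; DH-HMAC-CHAP runs as part of the fabric CONNECT.
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Target side: the new qpair should report a completed authentication.
    $RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'   # "completed"

    # Tear down before the next digest/dhgroup/key combination.
    $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"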
03:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:10:04.475 03:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:10:05.043 03:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:05.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:05.044 03:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:10:05.044 03:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.044 03:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.044 03:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.044 03:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:05.044 03:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:05.044 03:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:05.303 03:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:10:05.303 03:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:05.303 03:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:05.303 03:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:05.303 03:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:05.303 03:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:05.303 03:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:05.303 03:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.303 03:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.303 03:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.303 03:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:05.303 03:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:05.303 03:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:05.561 00:10:05.561 03:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:05.561 03:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:05.561 03:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:05.821 03:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:05.821 03:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:05.821 03:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.821 03:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.821 03:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.821 03:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:05.821 { 00:10:05.821 "cntlid": 11, 00:10:05.821 "qid": 0, 00:10:05.821 "state": "enabled", 00:10:05.821 "thread": "nvmf_tgt_poll_group_000", 00:10:05.821 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:10:05.821 "listen_address": { 00:10:05.821 "trtype": "TCP", 00:10:05.821 "adrfam": "IPv4", 00:10:05.821 "traddr": "10.0.0.3", 00:10:05.821 "trsvcid": "4420" 00:10:05.821 }, 00:10:05.821 "peer_address": { 00:10:05.821 "trtype": "TCP", 00:10:05.821 "adrfam": "IPv4", 00:10:05.821 "traddr": "10.0.0.1", 00:10:05.821 "trsvcid": "44232" 00:10:05.821 }, 00:10:05.821 "auth": { 00:10:05.821 "state": "completed", 00:10:05.821 "digest": "sha256", 00:10:05.821 "dhgroup": "ffdhe2048" 00:10:05.821 } 00:10:05.821 } 00:10:05.821 ]' 00:10:05.821 03:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:06.080 03:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:06.080 03:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:06.080 03:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:06.080 03:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:06.080 03:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:06.080 03:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:06.080 
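Besides the SPDK host, each key is also exercised from the Linux kernel host: once the bdev controller above is detached, the same subsystem is connected with nvme-cli, passing the host and controller secrets directly on the command line (as the following lines show for key1). Stripped of the log prefixes, and with the secrets replaced by placeholders, that pair of commands is:

    # <host-secret>/<ctrl-secret> stand in for the DHHC-1:... strings visible in the log.
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 \
        --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 \
        --dhchap-secret '<host-secret>' --dhchap-ctrl-secret '<ctrl-secret>'

    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

Here -i 1 caps the connection at a single I/O queue and -l 0 sets the controller-loss timeout to zero so a failed connection is not retried, which keeps each probe quick. For the key3 passes the --dhchap-ctrl-secret option is omitted, so only the host authenticates itself and there is no bidirectional authentication.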
03:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:06.340 03:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:10:06.340 03:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:10:07.277 03:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:07.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:07.277 03:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:10:07.277 03:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.277 03:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.277 03:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.277 03:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:07.277 03:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:07.277 03:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:07.535 03:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:10:07.535 03:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:07.535 03:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:07.535 03:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:07.535 03:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:07.535 03:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:07.535 03:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:07.535 03:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.535 03:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.535 03:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:10:07.535 03:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:07.535 03:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:07.535 03:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:07.793 00:10:07.793 03:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:07.793 03:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:07.793 03:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:08.050 03:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:08.050 03:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:08.050 03:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.050 03:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.050 03:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.050 03:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:08.050 { 00:10:08.050 "cntlid": 13, 00:10:08.050 "qid": 0, 00:10:08.050 "state": "enabled", 00:10:08.050 "thread": "nvmf_tgt_poll_group_000", 00:10:08.050 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:10:08.050 "listen_address": { 00:10:08.050 "trtype": "TCP", 00:10:08.050 "adrfam": "IPv4", 00:10:08.050 "traddr": "10.0.0.3", 00:10:08.050 "trsvcid": "4420" 00:10:08.050 }, 00:10:08.050 "peer_address": { 00:10:08.050 "trtype": "TCP", 00:10:08.050 "adrfam": "IPv4", 00:10:08.050 "traddr": "10.0.0.1", 00:10:08.050 "trsvcid": "44256" 00:10:08.050 }, 00:10:08.050 "auth": { 00:10:08.050 "state": "completed", 00:10:08.050 "digest": "sha256", 00:10:08.050 "dhgroup": "ffdhe2048" 00:10:08.050 } 00:10:08.050 } 00:10:08.050 ]' 00:10:08.050 03:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:08.307 03:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:08.307 03:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:08.307 03:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:08.307 03:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:08.307 03:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:08.307 03:12:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:08.307 03:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:08.565 03:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:10:08.565 03:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:10:09.131 03:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:09.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:09.131 03:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:10:09.131 03:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.131 03:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.131 03:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.131 03:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:09.131 03:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:09.131 03:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:09.389 03:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:10:09.389 03:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:09.389 03:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:09.389 03:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:09.389 03:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:09.389 03:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:09.389 03:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key3 00:10:09.389 03:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.389 03:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
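Each attach is verified from both ends before it is torn down: the host checks that the expected bdev controller actually exists, and the target checks the auth block that nvmf_subsystem_get_qpairs reports for the new qpair. Condensed, with the jq filters taken from the log (the expected digest and DH group change with each iteration), the check looks roughly like:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Host side: the attach must have produced a controller named nvme0.
    [[ $($RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # Target side: the qpair must have negotiated the expected parameters.
    qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]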
00:10:09.389 03:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.389 03:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:09.390 03:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:09.390 03:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:09.968 00:10:09.968 03:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:09.968 03:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:09.968 03:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:09.968 03:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:09.968 03:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:09.968 03:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.968 03:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.238 03:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.238 03:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:10.238 { 00:10:10.238 "cntlid": 15, 00:10:10.238 "qid": 0, 00:10:10.238 "state": "enabled", 00:10:10.238 "thread": "nvmf_tgt_poll_group_000", 00:10:10.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:10:10.238 "listen_address": { 00:10:10.238 "trtype": "TCP", 00:10:10.238 "adrfam": "IPv4", 00:10:10.238 "traddr": "10.0.0.3", 00:10:10.238 "trsvcid": "4420" 00:10:10.238 }, 00:10:10.238 "peer_address": { 00:10:10.238 "trtype": "TCP", 00:10:10.238 "adrfam": "IPv4", 00:10:10.238 "traddr": "10.0.0.1", 00:10:10.238 "trsvcid": "43172" 00:10:10.238 }, 00:10:10.238 "auth": { 00:10:10.238 "state": "completed", 00:10:10.238 "digest": "sha256", 00:10:10.238 "dhgroup": "ffdhe2048" 00:10:10.238 } 00:10:10.238 } 00:10:10.238 ]' 00:10:10.238 03:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:10.238 03:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:10.238 03:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:10.238 03:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:10.238 03:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:10.239 03:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:10.239 
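The secrets handed to nvme connect follow the NVMe DH-HMAC-CHAP secret representation, DHHC-1:<id>:<base64>:. Per the spec (background only, not something this test asserts), the <id> field records how the configured secret was transformed (00 = used as-is, 01/02/03 = hashed with SHA-256/384/512) and the base64 payload carries the secret bytes followed by a CRC-32. That can be sanity-checked against any secret in this log, for example:

    # One of the host secrets from this run; its 48 base64 characters decode to 36 bytes,
    # i.e. a 32-byte secret plus the 4-byte CRC-32.
    secret='DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE:'
    hash_id=$(echo "$secret" | cut -d: -f2)
    payload=$(echo "$secret" | cut -d: -f3)
    echo "hash id: $hash_id"
    echo "decoded length: $(echo -n "$payload" | base64 -d | wc -c) bytes"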
03:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:10.239 03:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:10.497 03:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:10:10.497 03:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:10:11.065 03:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:11.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:11.065 03:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:10:11.065 03:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.065 03:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.065 03:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.065 03:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:11.065 03:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:11.065 03:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:11.065 03:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:11.324 03:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:10:11.324 03:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:11.324 03:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:11.324 03:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:11.324 03:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:11.324 03:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:11.324 03:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:11.324 03:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.324 03:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:10:11.324 03:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.324 03:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:11.324 03:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:11.324 03:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:11.582 00:10:11.582 03:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:11.583 03:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:11.583 03:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:11.841 03:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:11.841 03:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:11.841 03:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.841 03:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.841 03:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.841 03:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:11.841 { 00:10:11.841 "cntlid": 17, 00:10:11.841 "qid": 0, 00:10:11.841 "state": "enabled", 00:10:11.841 "thread": "nvmf_tgt_poll_group_000", 00:10:11.841 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:10:11.841 "listen_address": { 00:10:11.841 "trtype": "TCP", 00:10:11.841 "adrfam": "IPv4", 00:10:11.841 "traddr": "10.0.0.3", 00:10:11.841 "trsvcid": "4420" 00:10:11.841 }, 00:10:11.841 "peer_address": { 00:10:11.841 "trtype": "TCP", 00:10:11.841 "adrfam": "IPv4", 00:10:11.841 "traddr": "10.0.0.1", 00:10:11.841 "trsvcid": "43190" 00:10:11.841 }, 00:10:11.841 "auth": { 00:10:11.841 "state": "completed", 00:10:11.841 "digest": "sha256", 00:10:11.841 "dhgroup": "ffdhe3072" 00:10:11.841 } 00:10:11.841 } 00:10:11.841 ]' 00:10:11.841 03:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:12.100 03:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:12.100 03:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:12.100 03:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:12.100 03:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:12.100 03:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:12.100 03:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:12.100 03:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:12.359 03:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:10:12.359 03:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:10:12.925 03:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:12.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:12.925 03:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:10:12.925 03:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.925 03:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.925 03:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.925 03:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:12.925 03:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:12.925 03:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:13.183 03:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:10:13.183 03:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:13.183 03:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:13.183 03:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:13.183 03:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:13.183 03:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:13.183 03:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:10:13.183 03:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.184 03:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.184 03:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.184 03:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:13.184 03:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:13.184 03:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:13.442 00:10:13.442 03:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:13.442 03:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:13.442 03:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:14.009 03:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:14.009 03:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:14.009 03:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.009 03:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.009 03:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.009 03:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:14.009 { 00:10:14.009 "cntlid": 19, 00:10:14.009 "qid": 0, 00:10:14.009 "state": "enabled", 00:10:14.009 "thread": "nvmf_tgt_poll_group_000", 00:10:14.009 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:10:14.009 "listen_address": { 00:10:14.009 "trtype": "TCP", 00:10:14.009 "adrfam": "IPv4", 00:10:14.009 "traddr": "10.0.0.3", 00:10:14.009 "trsvcid": "4420" 00:10:14.009 }, 00:10:14.009 "peer_address": { 00:10:14.009 "trtype": "TCP", 00:10:14.009 "adrfam": "IPv4", 00:10:14.009 "traddr": "10.0.0.1", 00:10:14.009 "trsvcid": "43198" 00:10:14.009 }, 00:10:14.009 "auth": { 00:10:14.009 "state": "completed", 00:10:14.009 "digest": "sha256", 00:10:14.009 "dhgroup": "ffdhe3072" 00:10:14.009 } 00:10:14.009 } 00:10:14.009 ]' 00:10:14.009 03:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:14.009 03:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:14.009 03:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:14.009 03:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:14.009 03:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:14.009 03:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:14.009 03:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:14.009 03:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:14.268 03:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:10:14.268 03:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:10:14.835 03:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:14.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:14.835 03:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:10:14.835 03:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.835 03:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.835 03:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.835 03:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:14.835 03:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:14.835 03:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:15.094 03:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:10:15.094 03:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:15.094 03:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:15.094 03:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:15.094 03:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:15.094 03:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:15.094 03:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:15.094 03:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.094 03:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.353 03:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.353 03:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:15.353 03:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:15.353 03:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:15.611 00:10:15.611 03:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:15.611 03:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:15.611 03:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:15.869 03:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:15.869 03:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:15.869 03:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.869 03:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.869 03:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.869 03:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:15.869 { 00:10:15.869 "cntlid": 21, 00:10:15.869 "qid": 0, 00:10:15.869 "state": "enabled", 00:10:15.869 "thread": "nvmf_tgt_poll_group_000", 00:10:15.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:10:15.869 "listen_address": { 00:10:15.869 "trtype": "TCP", 00:10:15.869 "adrfam": "IPv4", 00:10:15.869 "traddr": "10.0.0.3", 00:10:15.869 "trsvcid": "4420" 00:10:15.869 }, 00:10:15.869 "peer_address": { 00:10:15.869 "trtype": "TCP", 00:10:15.869 "adrfam": "IPv4", 00:10:15.869 "traddr": "10.0.0.1", 00:10:15.869 "trsvcid": "43226" 00:10:15.869 }, 00:10:15.869 "auth": { 00:10:15.869 "state": "completed", 00:10:15.869 "digest": "sha256", 00:10:15.869 "dhgroup": "ffdhe3072" 00:10:15.869 } 00:10:15.869 } 00:10:15.869 ]' 00:10:15.869 03:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:16.128 03:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:16.128 03:12:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:16.128 03:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:16.128 03:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:16.128 03:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:16.128 03:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:16.128 03:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:16.386 03:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:10:16.387 03:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:10:16.953 03:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:16.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:16.953 03:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:10:16.953 03:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.953 03:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.953 03:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.953 03:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:16.953 03:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:16.953 03:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:17.559 03:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:10:17.559 03:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:17.559 03:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:17.559 03:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:17.559 03:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:17.560 03:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:17.560 03:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key3 00:10:17.560 03:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.560 03:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.560 03:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.560 03:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:17.560 03:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:17.560 03:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:17.819 00:10:17.819 03:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:17.819 03:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:17.819 03:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:18.078 03:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:18.078 03:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:18.078 03:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.078 03:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.078 03:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.078 03:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:18.078 { 00:10:18.078 "cntlid": 23, 00:10:18.078 "qid": 0, 00:10:18.078 "state": "enabled", 00:10:18.078 "thread": "nvmf_tgt_poll_group_000", 00:10:18.078 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:10:18.078 "listen_address": { 00:10:18.078 "trtype": "TCP", 00:10:18.078 "adrfam": "IPv4", 00:10:18.078 "traddr": "10.0.0.3", 00:10:18.078 "trsvcid": "4420" 00:10:18.078 }, 00:10:18.078 "peer_address": { 00:10:18.078 "trtype": "TCP", 00:10:18.078 "adrfam": "IPv4", 00:10:18.078 "traddr": "10.0.0.1", 00:10:18.078 "trsvcid": "43250" 00:10:18.078 }, 00:10:18.078 "auth": { 00:10:18.078 "state": "completed", 00:10:18.078 "digest": "sha256", 00:10:18.078 "dhgroup": "ffdhe3072" 00:10:18.078 } 00:10:18.078 } 00:10:18.078 ]' 00:10:18.078 03:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:18.078 03:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:10:18.078 03:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:18.078 03:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:18.078 03:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:18.078 03:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:18.078 03:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:18.078 03:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:18.337 03:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:10:18.337 03:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:10:18.904 03:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:19.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:19.163 03:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:10:19.163 03:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.163 03:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.163 03:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.163 03:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:19.163 03:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:19.163 03:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:19.163 03:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:19.422 03:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:10:19.422 03:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:19.422 03:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:19.422 03:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:19.422 03:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:19.422 03:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:19.422 03:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:19.422 03:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.422 03:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.422 03:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.422 03:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:19.422 03:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:19.422 03:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:19.989 00:10:19.989 03:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:19.989 03:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:19.989 03:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:20.248 03:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:20.248 03:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:20.248 03:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.248 03:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.248 03:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.248 03:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:20.248 { 00:10:20.248 "cntlid": 25, 00:10:20.248 "qid": 0, 00:10:20.248 "state": "enabled", 00:10:20.248 "thread": "nvmf_tgt_poll_group_000", 00:10:20.248 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:10:20.248 "listen_address": { 00:10:20.248 "trtype": "TCP", 00:10:20.248 "adrfam": "IPv4", 00:10:20.248 "traddr": "10.0.0.3", 00:10:20.248 "trsvcid": "4420" 00:10:20.248 }, 00:10:20.248 "peer_address": { 00:10:20.248 "trtype": "TCP", 00:10:20.248 "adrfam": "IPv4", 00:10:20.248 "traddr": "10.0.0.1", 00:10:20.248 "trsvcid": "50490" 00:10:20.248 }, 00:10:20.248 "auth": { 00:10:20.248 "state": "completed", 00:10:20.248 "digest": "sha256", 00:10:20.248 "dhgroup": "ffdhe4096" 00:10:20.248 } 00:10:20.248 } 00:10:20.248 ]' 00:10:20.248 03:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:10:20.248 03:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:20.248 03:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:20.248 03:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:20.248 03:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:20.507 03:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:20.507 03:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:20.507 03:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:20.766 03:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:10:20.766 03:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:10:21.334 03:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:21.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:21.334 03:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:10:21.334 03:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.334 03:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.334 03:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.334 03:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:21.334 03:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:21.334 03:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:21.592 03:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:10:21.592 03:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:21.592 03:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:21.592 03:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:21.592 03:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:21.592 03:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:21.592 03:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:21.592 03:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.592 03:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.592 03:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.592 03:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:21.592 03:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:21.592 03:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:22.160 00:10:22.160 03:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:22.160 03:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:22.160 03:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:22.419 03:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:22.419 03:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:22.419 03:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.419 03:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.419 03:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.419 03:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:22.419 { 00:10:22.419 "cntlid": 27, 00:10:22.419 "qid": 0, 00:10:22.419 "state": "enabled", 00:10:22.419 "thread": "nvmf_tgt_poll_group_000", 00:10:22.419 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:10:22.419 "listen_address": { 00:10:22.419 "trtype": "TCP", 00:10:22.419 "adrfam": "IPv4", 00:10:22.419 "traddr": "10.0.0.3", 00:10:22.419 "trsvcid": "4420" 00:10:22.419 }, 00:10:22.419 "peer_address": { 00:10:22.419 "trtype": "TCP", 00:10:22.419 "adrfam": "IPv4", 00:10:22.419 "traddr": "10.0.0.1", 00:10:22.419 "trsvcid": "50526" 00:10:22.419 }, 00:10:22.419 "auth": { 00:10:22.419 "state": "completed", 
00:10:22.419 "digest": "sha256", 00:10:22.419 "dhgroup": "ffdhe4096" 00:10:22.419 } 00:10:22.419 } 00:10:22.419 ]' 00:10:22.419 03:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:22.419 03:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:22.419 03:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:22.419 03:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:22.419 03:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:22.419 03:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:22.419 03:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:22.419 03:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:22.678 03:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:10:22.678 03:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:10:23.245 03:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:23.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:23.245 03:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:10:23.245 03:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.245 03:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.504 03:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.504 03:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:23.504 03:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:23.504 03:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:23.504 03:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:10:23.504 03:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:23.504 03:13:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:23.504 03:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:23.504 03:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:23.504 03:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:23.504 03:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:23.504 03:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.504 03:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.763 03:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.763 03:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:23.763 03:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:23.763 03:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:24.021 00:10:24.021 03:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:24.021 03:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:24.021 03:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:24.280 03:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:24.280 03:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:24.280 03:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.280 03:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.280 03:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.280 03:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:24.280 { 00:10:24.280 "cntlid": 29, 00:10:24.280 "qid": 0, 00:10:24.280 "state": "enabled", 00:10:24.280 "thread": "nvmf_tgt_poll_group_000", 00:10:24.280 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:10:24.280 "listen_address": { 00:10:24.280 "trtype": "TCP", 00:10:24.280 "adrfam": "IPv4", 00:10:24.280 "traddr": "10.0.0.3", 00:10:24.280 "trsvcid": "4420" 00:10:24.280 }, 00:10:24.280 "peer_address": { 00:10:24.280 "trtype": "TCP", 00:10:24.280 "adrfam": 
"IPv4", 00:10:24.280 "traddr": "10.0.0.1", 00:10:24.280 "trsvcid": "50542" 00:10:24.280 }, 00:10:24.280 "auth": { 00:10:24.280 "state": "completed", 00:10:24.280 "digest": "sha256", 00:10:24.280 "dhgroup": "ffdhe4096" 00:10:24.280 } 00:10:24.280 } 00:10:24.280 ]' 00:10:24.280 03:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:24.280 03:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:24.280 03:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:24.612 03:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:24.612 03:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:24.612 03:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:24.612 03:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:24.612 03:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:24.870 03:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:10:24.870 03:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:10:25.438 03:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:25.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:25.438 03:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:10:25.438 03:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.438 03:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.438 03:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.438 03:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:25.438 03:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:25.438 03:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:25.696 03:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:10:25.696 03:13:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:25.696 03:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:25.696 03:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:25.696 03:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:25.696 03:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:25.696 03:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key3 00:10:25.696 03:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.696 03:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.696 03:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.696 03:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:25.696 03:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:25.696 03:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:26.303 00:10:26.303 03:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:26.303 03:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:26.303 03:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:26.303 03:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:26.303 03:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:26.303 03:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.303 03:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.303 03:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.303 03:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:26.303 { 00:10:26.303 "cntlid": 31, 00:10:26.303 "qid": 0, 00:10:26.303 "state": "enabled", 00:10:26.303 "thread": "nvmf_tgt_poll_group_000", 00:10:26.303 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:10:26.303 "listen_address": { 00:10:26.303 "trtype": "TCP", 00:10:26.303 "adrfam": "IPv4", 00:10:26.303 "traddr": "10.0.0.3", 00:10:26.303 "trsvcid": "4420" 00:10:26.303 }, 00:10:26.303 "peer_address": { 00:10:26.303 "trtype": "TCP", 
00:10:26.303 "adrfam": "IPv4", 00:10:26.303 "traddr": "10.0.0.1", 00:10:26.303 "trsvcid": "50554" 00:10:26.303 }, 00:10:26.303 "auth": { 00:10:26.303 "state": "completed", 00:10:26.303 "digest": "sha256", 00:10:26.303 "dhgroup": "ffdhe4096" 00:10:26.303 } 00:10:26.303 } 00:10:26.303 ]' 00:10:26.303 03:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:26.562 03:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:26.562 03:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:26.562 03:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:26.562 03:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:26.562 03:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:26.562 03:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:26.562 03:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:26.821 03:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:10:26.821 03:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:10:27.388 03:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:27.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:27.647 03:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:10:27.647 03:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.647 03:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.647 03:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.647 03:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:27.647 03:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:27.647 03:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:27.647 03:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:27.647 03:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:10:27.647 
03:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:27.647 03:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:27.647 03:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:27.647 03:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:27.648 03:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:27.648 03:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:27.648 03:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.648 03:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.648 03:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.648 03:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:27.648 03:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:27.648 03:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:28.216 00:10:28.216 03:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:28.216 03:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:28.216 03:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:28.475 03:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:28.475 03:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:28.475 03:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.475 03:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.475 03:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.475 03:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:28.475 { 00:10:28.475 "cntlid": 33, 00:10:28.475 "qid": 0, 00:10:28.475 "state": "enabled", 00:10:28.475 "thread": "nvmf_tgt_poll_group_000", 00:10:28.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:10:28.475 "listen_address": { 00:10:28.475 "trtype": "TCP", 00:10:28.475 "adrfam": "IPv4", 00:10:28.475 "traddr": 
"10.0.0.3", 00:10:28.475 "trsvcid": "4420" 00:10:28.475 }, 00:10:28.475 "peer_address": { 00:10:28.475 "trtype": "TCP", 00:10:28.475 "adrfam": "IPv4", 00:10:28.475 "traddr": "10.0.0.1", 00:10:28.475 "trsvcid": "50584" 00:10:28.475 }, 00:10:28.475 "auth": { 00:10:28.475 "state": "completed", 00:10:28.475 "digest": "sha256", 00:10:28.475 "dhgroup": "ffdhe6144" 00:10:28.475 } 00:10:28.475 } 00:10:28.475 ]' 00:10:28.475 03:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:28.734 03:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:28.734 03:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:28.734 03:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:28.734 03:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:28.734 03:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:28.734 03:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:28.734 03:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:28.993 03:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:10:28.993 03:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:10:29.560 03:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:29.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:29.560 03:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:10:29.560 03:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.560 03:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.560 03:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.560 03:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:29.560 03:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:29.560 03:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:29.819 03:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:10:29.819 03:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:29.819 03:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:29.819 03:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:29.819 03:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:29.819 03:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:29.819 03:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:29.819 03:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.819 03:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.819 03:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.819 03:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:29.819 03:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:29.819 03:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:30.387 00:10:30.387 03:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:30.387 03:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:30.387 03:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:30.645 03:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:30.646 03:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:30.646 03:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.646 03:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.646 03:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.646 03:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:30.646 { 00:10:30.646 "cntlid": 35, 00:10:30.646 "qid": 0, 00:10:30.646 "state": "enabled", 00:10:30.646 "thread": "nvmf_tgt_poll_group_000", 
00:10:30.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:10:30.646 "listen_address": { 00:10:30.646 "trtype": "TCP", 00:10:30.646 "adrfam": "IPv4", 00:10:30.646 "traddr": "10.0.0.3", 00:10:30.646 "trsvcid": "4420" 00:10:30.646 }, 00:10:30.646 "peer_address": { 00:10:30.646 "trtype": "TCP", 00:10:30.646 "adrfam": "IPv4", 00:10:30.646 "traddr": "10.0.0.1", 00:10:30.646 "trsvcid": "43526" 00:10:30.646 }, 00:10:30.646 "auth": { 00:10:30.646 "state": "completed", 00:10:30.646 "digest": "sha256", 00:10:30.646 "dhgroup": "ffdhe6144" 00:10:30.646 } 00:10:30.646 } 00:10:30.646 ]' 00:10:30.646 03:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:30.646 03:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:30.646 03:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:30.646 03:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:30.646 03:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:30.646 03:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:30.646 03:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:30.646 03:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:31.214 03:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:10:31.214 03:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:10:31.812 03:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:31.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:31.812 03:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:10:31.812 03:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.812 03:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.812 03:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.812 03:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:31.812 03:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:31.812 03:13:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:32.083 03:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:10:32.083 03:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:32.083 03:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:32.083 03:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:32.083 03:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:32.083 03:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:32.083 03:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:32.083 03:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.083 03:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.083 03:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.083 03:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:32.083 03:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:32.083 03:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:32.343 00:10:32.343 03:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:32.343 03:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:32.343 03:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:32.910 03:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:32.910 03:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:32.910 03:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.910 03:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.910 03:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.910 03:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:32.910 { 
00:10:32.910 "cntlid": 37, 00:10:32.910 "qid": 0, 00:10:32.910 "state": "enabled", 00:10:32.910 "thread": "nvmf_tgt_poll_group_000", 00:10:32.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:10:32.910 "listen_address": { 00:10:32.910 "trtype": "TCP", 00:10:32.910 "adrfam": "IPv4", 00:10:32.910 "traddr": "10.0.0.3", 00:10:32.910 "trsvcid": "4420" 00:10:32.910 }, 00:10:32.910 "peer_address": { 00:10:32.910 "trtype": "TCP", 00:10:32.910 "adrfam": "IPv4", 00:10:32.910 "traddr": "10.0.0.1", 00:10:32.910 "trsvcid": "43540" 00:10:32.910 }, 00:10:32.910 "auth": { 00:10:32.910 "state": "completed", 00:10:32.910 "digest": "sha256", 00:10:32.910 "dhgroup": "ffdhe6144" 00:10:32.910 } 00:10:32.910 } 00:10:32.910 ]' 00:10:32.910 03:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:32.910 03:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:32.910 03:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:32.910 03:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:32.910 03:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:32.910 03:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:32.910 03:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:32.910 03:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:33.169 03:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:10:33.169 03:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:10:33.737 03:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:33.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:33.737 03:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:10:33.737 03:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.737 03:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.737 03:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.737 03:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:33.737 03:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:33.737 03:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:33.996 03:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:10:33.996 03:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:33.996 03:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:33.996 03:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:33.996 03:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:33.996 03:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:33.996 03:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key3 00:10:33.996 03:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.996 03:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.996 03:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.996 03:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:33.996 03:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:33.996 03:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:34.564 00:10:34.564 03:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:34.564 03:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:34.564 03:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:34.823 03:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:34.823 03:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:34.823 03:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.823 03:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.823 03:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.823 03:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:10:34.823 { 00:10:34.823 "cntlid": 39, 00:10:34.823 "qid": 0, 00:10:34.823 "state": "enabled", 00:10:34.823 "thread": "nvmf_tgt_poll_group_000", 00:10:34.823 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:10:34.823 "listen_address": { 00:10:34.823 "trtype": "TCP", 00:10:34.823 "adrfam": "IPv4", 00:10:34.823 "traddr": "10.0.0.3", 00:10:34.823 "trsvcid": "4420" 00:10:34.823 }, 00:10:34.823 "peer_address": { 00:10:34.823 "trtype": "TCP", 00:10:34.823 "adrfam": "IPv4", 00:10:34.823 "traddr": "10.0.0.1", 00:10:34.823 "trsvcid": "43574" 00:10:34.823 }, 00:10:34.823 "auth": { 00:10:34.823 "state": "completed", 00:10:34.823 "digest": "sha256", 00:10:34.823 "dhgroup": "ffdhe6144" 00:10:34.823 } 00:10:34.823 } 00:10:34.823 ]' 00:10:34.823 03:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:34.823 03:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:34.823 03:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:34.823 03:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:34.823 03:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:35.081 03:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:35.081 03:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:35.081 03:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:35.340 03:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:10:35.340 03:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:10:35.908 03:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:35.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:35.908 03:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:10:35.908 03:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.908 03:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.908 03:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.908 03:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:35.908 03:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:35.908 03:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:35.908 03:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:36.167 03:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:10:36.167 03:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:36.167 03:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:36.167 03:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:36.167 03:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:36.167 03:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:36.167 03:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:36.167 03:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.167 03:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.167 03:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.167 03:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:36.167 03:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:36.167 03:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:36.734 00:10:36.734 03:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:36.734 03:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:36.734 03:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:37.302 03:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:37.302 03:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:37.302 03:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.302 03:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.302 03:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:10:37.302 03:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:37.302 { 00:10:37.302 "cntlid": 41, 00:10:37.302 "qid": 0, 00:10:37.302 "state": "enabled", 00:10:37.302 "thread": "nvmf_tgt_poll_group_000", 00:10:37.302 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:10:37.302 "listen_address": { 00:10:37.302 "trtype": "TCP", 00:10:37.302 "adrfam": "IPv4", 00:10:37.302 "traddr": "10.0.0.3", 00:10:37.302 "trsvcid": "4420" 00:10:37.302 }, 00:10:37.302 "peer_address": { 00:10:37.302 "trtype": "TCP", 00:10:37.302 "adrfam": "IPv4", 00:10:37.302 "traddr": "10.0.0.1", 00:10:37.302 "trsvcid": "43600" 00:10:37.302 }, 00:10:37.302 "auth": { 00:10:37.302 "state": "completed", 00:10:37.302 "digest": "sha256", 00:10:37.302 "dhgroup": "ffdhe8192" 00:10:37.302 } 00:10:37.302 } 00:10:37.302 ]' 00:10:37.302 03:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:37.302 03:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:37.302 03:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:37.302 03:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:37.302 03:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:37.302 03:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:37.302 03:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:37.302 03:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:37.561 03:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:10:37.561 03:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:10:38.128 03:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:38.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:38.128 03:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:10:38.128 03:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.128 03:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.386 03:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
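The passes above all follow the same per-key cycle from target/auth.sh: constrain the host's bdev_nvme layer to a single digest/DH-group combination, register the host NQN on the subsystem with its DH-HMAC-CHAP key (plus the controller key when one exists), attach a controller over TCP so the handshake runs, verify the resulting qpair, then tear everything down again. A minimal sketch of one such cycle, assuming the target and host applications are already running, that key0/ckey0 were registered as keyring entries earlier in the run, and that $hostnqn stands in for the nqn.2014-08.org.nvmexpress:uuid:... host NQN used throughout this log (the target-side calls go through the script's rpc_cmd helper, so its RPC socket is omitted here):

    # host side (RPC server at /var/tmp/host.sock): allow only one digest and DH group for this pass
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    # target side: authorize the host NQN and bind it to the keyring keys
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side: attach a controller; the DH-HMAC-CHAP handshake runs as part of the attach
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # tear down and de-authorize again before the next digest/dhgroup/key combination
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"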
00:10:38.387 03:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:38.387 03:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:38.387 03:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:38.645 03:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:10:38.645 03:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:38.645 03:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:38.645 03:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:38.645 03:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:38.645 03:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:38.645 03:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:38.645 03:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.645 03:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.645 03:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.645 03:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:38.645 03:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:38.645 03:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:39.213 00:10:39.213 03:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:39.213 03:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:39.213 03:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:39.471 03:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:39.471 03:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:39.471 03:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.471 03:13:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.471 03:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.471 03:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:39.471 { 00:10:39.471 "cntlid": 43, 00:10:39.471 "qid": 0, 00:10:39.471 "state": "enabled", 00:10:39.472 "thread": "nvmf_tgt_poll_group_000", 00:10:39.472 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:10:39.472 "listen_address": { 00:10:39.472 "trtype": "TCP", 00:10:39.472 "adrfam": "IPv4", 00:10:39.472 "traddr": "10.0.0.3", 00:10:39.472 "trsvcid": "4420" 00:10:39.472 }, 00:10:39.472 "peer_address": { 00:10:39.472 "trtype": "TCP", 00:10:39.472 "adrfam": "IPv4", 00:10:39.472 "traddr": "10.0.0.1", 00:10:39.472 "trsvcid": "43622" 00:10:39.472 }, 00:10:39.472 "auth": { 00:10:39.472 "state": "completed", 00:10:39.472 "digest": "sha256", 00:10:39.472 "dhgroup": "ffdhe8192" 00:10:39.472 } 00:10:39.472 } 00:10:39.472 ]' 00:10:39.472 03:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:39.472 03:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:39.472 03:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:39.730 03:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:39.730 03:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:39.730 03:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:39.730 03:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:39.730 03:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:39.989 03:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:10:39.989 03:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:10:40.556 03:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:40.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:40.556 03:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:10:40.556 03:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.556 03:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:10:40.556 03:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.556 03:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:40.556 03:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:40.556 03:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:40.814 03:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:10:40.814 03:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:40.814 03:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:40.814 03:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:40.814 03:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:40.814 03:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:40.814 03:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.814 03:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.814 03:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.814 03:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.814 03:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.814 03:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.814 03:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:41.792 00:10:41.792 03:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:41.792 03:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:41.792 03:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:41.792 03:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:41.792 03:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:41.792 03:13:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.792 03:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.792 03:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.792 03:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:41.792 { 00:10:41.792 "cntlid": 45, 00:10:41.792 "qid": 0, 00:10:41.792 "state": "enabled", 00:10:41.792 "thread": "nvmf_tgt_poll_group_000", 00:10:41.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:10:41.792 "listen_address": { 00:10:41.792 "trtype": "TCP", 00:10:41.792 "adrfam": "IPv4", 00:10:41.792 "traddr": "10.0.0.3", 00:10:41.792 "trsvcid": "4420" 00:10:41.792 }, 00:10:41.792 "peer_address": { 00:10:41.792 "trtype": "TCP", 00:10:41.792 "adrfam": "IPv4", 00:10:41.792 "traddr": "10.0.0.1", 00:10:41.792 "trsvcid": "45898" 00:10:41.792 }, 00:10:41.792 "auth": { 00:10:41.792 "state": "completed", 00:10:41.792 "digest": "sha256", 00:10:41.792 "dhgroup": "ffdhe8192" 00:10:41.792 } 00:10:41.792 } 00:10:41.792 ]' 00:10:41.793 03:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:41.793 03:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:41.793 03:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:42.051 03:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:42.051 03:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:42.051 03:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:42.051 03:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:42.051 03:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:42.310 03:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:10:42.310 03:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:10:42.878 03:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:42.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:42.878 03:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:10:42.878 03:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
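After each attach the script asserts that the qpair actually negotiated what was requested: nvmf_subsystem_get_qpairs is queried on the target and the first qpair's auth block must report the expected digest, the expected DH group, and the state "completed". Roughly, with $digest and $dhgroup standing in for the loop variables of the current pass:

    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]

The host side is checked the same way beforehand: bdev_nvme_get_controllers piped through jq -r '.[].name' has to come back as nvme0.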
00:10:42.878 03:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.878 03:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.878 03:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:42.878 03:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:42.878 03:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:43.136 03:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:10:43.136 03:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:43.136 03:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:43.136 03:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:43.136 03:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:43.136 03:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:43.136 03:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key3 00:10:43.136 03:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.136 03:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.136 03:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.136 03:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:43.136 03:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:43.136 03:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:44.073 00:10:44.073 03:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:44.073 03:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:44.073 03:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:44.073 03:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:44.073 03:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:44.073 
03:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.073 03:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.073 03:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.073 03:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:44.073 { 00:10:44.073 "cntlid": 47, 00:10:44.073 "qid": 0, 00:10:44.073 "state": "enabled", 00:10:44.073 "thread": "nvmf_tgt_poll_group_000", 00:10:44.073 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:10:44.073 "listen_address": { 00:10:44.073 "trtype": "TCP", 00:10:44.073 "adrfam": "IPv4", 00:10:44.073 "traddr": "10.0.0.3", 00:10:44.073 "trsvcid": "4420" 00:10:44.073 }, 00:10:44.073 "peer_address": { 00:10:44.073 "trtype": "TCP", 00:10:44.073 "adrfam": "IPv4", 00:10:44.073 "traddr": "10.0.0.1", 00:10:44.073 "trsvcid": "45918" 00:10:44.073 }, 00:10:44.073 "auth": { 00:10:44.073 "state": "completed", 00:10:44.073 "digest": "sha256", 00:10:44.073 "dhgroup": "ffdhe8192" 00:10:44.073 } 00:10:44.073 } 00:10:44.073 ]' 00:10:44.073 03:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:44.073 03:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:44.073 03:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:44.331 03:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:44.331 03:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:44.331 03:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:44.331 03:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:44.332 03:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:44.590 03:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:10:44.590 03:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:10:45.158 03:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:45.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:45.158 03:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:10:45.158 03:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.158 03:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
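Each pass also exercises the kernel initiator: nvme-cli connects to the same subsystem with the raw DHHC-1 secrets (the controller secret is passed only for keys that have one configured) and disconnects again before the host entry is removed. A rough equivalent of the commands in this trace, with the secrets and the host UUID elided:

    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0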
00:10:45.158 03:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.158 03:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:10:45.158 03:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:45.158 03:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:45.158 03:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:45.158 03:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:45.727 03:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:10:45.727 03:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:45.727 03:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:45.727 03:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:45.727 03:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:45.727 03:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:45.727 03:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.727 03:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.727 03:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.727 03:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.727 03:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.727 03:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.727 03:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.987 00:10:45.987 03:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:45.987 03:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:45.987 03:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:46.246 03:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:46.246 03:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:46.246 03:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.246 03:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.246 03:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.246 03:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:46.246 { 00:10:46.246 "cntlid": 49, 00:10:46.246 "qid": 0, 00:10:46.246 "state": "enabled", 00:10:46.246 "thread": "nvmf_tgt_poll_group_000", 00:10:46.246 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:10:46.246 "listen_address": { 00:10:46.246 "trtype": "TCP", 00:10:46.246 "adrfam": "IPv4", 00:10:46.246 "traddr": "10.0.0.3", 00:10:46.246 "trsvcid": "4420" 00:10:46.246 }, 00:10:46.246 "peer_address": { 00:10:46.246 "trtype": "TCP", 00:10:46.246 "adrfam": "IPv4", 00:10:46.246 "traddr": "10.0.0.1", 00:10:46.246 "trsvcid": "45944" 00:10:46.246 }, 00:10:46.246 "auth": { 00:10:46.246 "state": "completed", 00:10:46.246 "digest": "sha384", 00:10:46.246 "dhgroup": "null" 00:10:46.246 } 00:10:46.246 } 00:10:46.246 ]' 00:10:46.246 03:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:46.246 03:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:46.246 03:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:46.246 03:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:46.246 03:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:46.505 03:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:46.505 03:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:46.505 03:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:46.764 03:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:10:46.764 03:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:10:47.332 03:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:47.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:47.332 03:13:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:10:47.332 03:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.332 03:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.332 03:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.332 03:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:47.332 03:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:47.332 03:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:47.592 03:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:10:47.592 03:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:47.592 03:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:47.592 03:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:47.592 03:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:47.592 03:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:47.592 03:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.592 03:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.592 03:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.592 03:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.592 03:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.592 03:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.592 03:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.851 00:10:47.851 03:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:47.851 03:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:47.851 03:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:48.110 03:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:48.110 03:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:48.110 03:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.110 03:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.110 03:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.110 03:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:48.110 { 00:10:48.110 "cntlid": 51, 00:10:48.110 "qid": 0, 00:10:48.110 "state": "enabled", 00:10:48.110 "thread": "nvmf_tgt_poll_group_000", 00:10:48.110 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:10:48.110 "listen_address": { 00:10:48.110 "trtype": "TCP", 00:10:48.110 "adrfam": "IPv4", 00:10:48.110 "traddr": "10.0.0.3", 00:10:48.110 "trsvcid": "4420" 00:10:48.110 }, 00:10:48.110 "peer_address": { 00:10:48.110 "trtype": "TCP", 00:10:48.110 "adrfam": "IPv4", 00:10:48.110 "traddr": "10.0.0.1", 00:10:48.110 "trsvcid": "45960" 00:10:48.110 }, 00:10:48.110 "auth": { 00:10:48.110 "state": "completed", 00:10:48.110 "digest": "sha384", 00:10:48.110 "dhgroup": "null" 00:10:48.110 } 00:10:48.110 } 00:10:48.110 ]' 00:10:48.110 03:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:48.110 03:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:48.110 03:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:48.110 03:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:48.110 03:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:48.369 03:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:48.369 03:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:48.369 03:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:48.369 03:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:10:48.369 03:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:10:48.936 03:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:48.936 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:48.936 03:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:10:48.936 03:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.936 03:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.936 03:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.936 03:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:48.936 03:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:48.936 03:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:49.503 03:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:10:49.503 03:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:49.503 03:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:49.503 03:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:49.503 03:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:49.503 03:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:49.503 03:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.503 03:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.503 03:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.503 03:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.503 03:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.503 03:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.503 03:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.761 00:10:49.761 03:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:49.761 03:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:10:49.761 03:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:50.020 03:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:50.020 03:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:50.020 03:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.020 03:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.020 03:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.020 03:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:50.020 { 00:10:50.020 "cntlid": 53, 00:10:50.020 "qid": 0, 00:10:50.020 "state": "enabled", 00:10:50.020 "thread": "nvmf_tgt_poll_group_000", 00:10:50.020 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:10:50.020 "listen_address": { 00:10:50.020 "trtype": "TCP", 00:10:50.020 "adrfam": "IPv4", 00:10:50.020 "traddr": "10.0.0.3", 00:10:50.020 "trsvcid": "4420" 00:10:50.020 }, 00:10:50.020 "peer_address": { 00:10:50.020 "trtype": "TCP", 00:10:50.020 "adrfam": "IPv4", 00:10:50.020 "traddr": "10.0.0.1", 00:10:50.020 "trsvcid": "40014" 00:10:50.020 }, 00:10:50.020 "auth": { 00:10:50.020 "state": "completed", 00:10:50.020 "digest": "sha384", 00:10:50.020 "dhgroup": "null" 00:10:50.020 } 00:10:50.020 } 00:10:50.020 ]' 00:10:50.020 03:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:50.020 03:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:50.020 03:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:50.020 03:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:50.020 03:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:50.020 03:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:50.020 03:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:50.020 03:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:50.330 03:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:10:50.330 03:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:10:51.275 03:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:51.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:51.275 03:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:10:51.275 03:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.275 03:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.275 03:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.275 03:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:51.275 03:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:51.275 03:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:51.275 03:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:10:51.275 03:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:51.275 03:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:51.275 03:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:51.275 03:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:51.275 03:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:51.275 03:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key3 00:10:51.275 03:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.275 03:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.275 03:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.275 03:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:51.275 03:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:51.275 03:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:51.842 00:10:51.842 03:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:51.842 03:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:10:51.842 03:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:52.101 03:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:52.101 03:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:52.101 03:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.101 03:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.101 03:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.101 03:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:52.101 { 00:10:52.101 "cntlid": 55, 00:10:52.101 "qid": 0, 00:10:52.101 "state": "enabled", 00:10:52.101 "thread": "nvmf_tgt_poll_group_000", 00:10:52.101 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:10:52.101 "listen_address": { 00:10:52.101 "trtype": "TCP", 00:10:52.101 "adrfam": "IPv4", 00:10:52.101 "traddr": "10.0.0.3", 00:10:52.101 "trsvcid": "4420" 00:10:52.101 }, 00:10:52.101 "peer_address": { 00:10:52.101 "trtype": "TCP", 00:10:52.101 "adrfam": "IPv4", 00:10:52.101 "traddr": "10.0.0.1", 00:10:52.101 "trsvcid": "40044" 00:10:52.101 }, 00:10:52.101 "auth": { 00:10:52.101 "state": "completed", 00:10:52.101 "digest": "sha384", 00:10:52.101 "dhgroup": "null" 00:10:52.101 } 00:10:52.101 } 00:10:52.101 ]' 00:10:52.101 03:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:52.101 03:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:52.101 03:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:52.101 03:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:52.101 03:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:52.101 03:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:52.101 03:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:52.101 03:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:52.360 03:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:10:52.360 03:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:10:53.295 03:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:53.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
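For reference, one iteration of the connect_authenticate loop being exercised above reduces to roughly the following shell sketch. It is a minimal reconstruction from the commands visible in this log only; the hostnqn/uuid, subsystem NQN, key names and the /var/tmp/host.sock host RPC socket are taken from the output above, and the way rpc_cmd reaches the target (a plain rpc.py call on the default socket) is an assumption, not something the log shows.

    # One connect_authenticate pass (digest=sha384, dhgroup=ffdhe2048, keyid=0),
    # reconstructed from the log above; a sketch, not the test script itself.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    hostnqn="nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918"
    subnqn="nqn.2024-03.io.spdk:cnode0"

    # Host side: restrict the initiator bdev layer to the digest/dhgroup under test.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

    # Target side (the log's rpc_cmd wrapper; assumed here to be rpc.py on the default socket):
    # allow the host with the key pair under test.
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Attach a controller through the authenticated path, then verify the qpair auth state.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expected: completed
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

    # Repeat the check via the kernel initiator, then clean up. The DHHC-1:... secrets are the
    # values printed in the log above (placeholders here).
    nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 \
        --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."
    nvme disconnect -n "$subnqn"
    $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The remainder of the log is the same loop repeated for each dhgroup (null, ffdhe2048, ffdhe3072, ffdhe4096, ...) and each key index, which is why the entries below differ only in cntlid, peer port, dhgroup and key number.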
00:10:53.295 03:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:10:53.295 03:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.295 03:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.295 03:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.295 03:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:53.295 03:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:53.295 03:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:53.295 03:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:53.295 03:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:10:53.295 03:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:53.295 03:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:53.295 03:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:53.295 03:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:53.295 03:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:53.295 03:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.295 03:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.295 03:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.295 03:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.295 03:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.295 03:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.295 03:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.862 00:10:53.862 03:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:53.862 
03:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:53.862 03:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:54.121 03:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:54.121 03:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:54.121 03:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.121 03:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.121 03:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.121 03:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:54.121 { 00:10:54.121 "cntlid": 57, 00:10:54.121 "qid": 0, 00:10:54.121 "state": "enabled", 00:10:54.121 "thread": "nvmf_tgt_poll_group_000", 00:10:54.121 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:10:54.121 "listen_address": { 00:10:54.121 "trtype": "TCP", 00:10:54.121 "adrfam": "IPv4", 00:10:54.121 "traddr": "10.0.0.3", 00:10:54.121 "trsvcid": "4420" 00:10:54.121 }, 00:10:54.121 "peer_address": { 00:10:54.121 "trtype": "TCP", 00:10:54.121 "adrfam": "IPv4", 00:10:54.121 "traddr": "10.0.0.1", 00:10:54.121 "trsvcid": "40072" 00:10:54.121 }, 00:10:54.121 "auth": { 00:10:54.121 "state": "completed", 00:10:54.121 "digest": "sha384", 00:10:54.121 "dhgroup": "ffdhe2048" 00:10:54.121 } 00:10:54.121 } 00:10:54.121 ]' 00:10:54.121 03:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:54.121 03:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:54.121 03:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:54.121 03:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:54.121 03:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:54.121 03:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:54.121 03:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:54.121 03:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:54.380 03:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:10:54.380 03:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: 
--dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:10:55.315 03:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:55.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:55.315 03:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:10:55.315 03:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.315 03:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.315 03:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.315 03:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:55.315 03:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:55.315 03:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:55.315 03:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:10:55.315 03:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:55.315 03:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:55.315 03:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:55.315 03:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:55.315 03:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:55.315 03:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.315 03:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.315 03:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.315 03:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.315 03:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.315 03:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.315 03:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.882 00:10:55.882 03:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:55.882 03:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:55.882 03:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:56.140 03:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:56.140 03:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:56.140 03:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.140 03:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.140 03:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.140 03:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:56.140 { 00:10:56.140 "cntlid": 59, 00:10:56.140 "qid": 0, 00:10:56.140 "state": "enabled", 00:10:56.140 "thread": "nvmf_tgt_poll_group_000", 00:10:56.140 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:10:56.140 "listen_address": { 00:10:56.140 "trtype": "TCP", 00:10:56.140 "adrfam": "IPv4", 00:10:56.140 "traddr": "10.0.0.3", 00:10:56.140 "trsvcid": "4420" 00:10:56.140 }, 00:10:56.140 "peer_address": { 00:10:56.140 "trtype": "TCP", 00:10:56.140 "adrfam": "IPv4", 00:10:56.140 "traddr": "10.0.0.1", 00:10:56.140 "trsvcid": "40084" 00:10:56.140 }, 00:10:56.140 "auth": { 00:10:56.140 "state": "completed", 00:10:56.140 "digest": "sha384", 00:10:56.140 "dhgroup": "ffdhe2048" 00:10:56.140 } 00:10:56.140 } 00:10:56.140 ]' 00:10:56.140 03:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:56.140 03:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:56.140 03:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:56.140 03:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:56.140 03:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:56.398 03:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:56.398 03:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:56.398 03:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:56.655 03:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:10:56.656 03:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:10:57.222 03:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:57.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:57.222 03:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:10:57.222 03:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.222 03:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.222 03:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.222 03:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:57.222 03:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:57.222 03:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:57.479 03:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:10:57.479 03:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:57.479 03:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:57.479 03:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:57.479 03:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:57.479 03:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:57.479 03:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.479 03:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.479 03:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.479 03:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.479 03:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.479 03:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.479 03:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:58.045 00:10:58.045 03:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:58.045 03:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:58.045 03:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:58.303 03:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:58.303 03:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:58.303 03:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.303 03:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.303 03:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.303 03:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:58.303 { 00:10:58.303 "cntlid": 61, 00:10:58.303 "qid": 0, 00:10:58.303 "state": "enabled", 00:10:58.303 "thread": "nvmf_tgt_poll_group_000", 00:10:58.303 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:10:58.303 "listen_address": { 00:10:58.303 "trtype": "TCP", 00:10:58.303 "adrfam": "IPv4", 00:10:58.303 "traddr": "10.0.0.3", 00:10:58.303 "trsvcid": "4420" 00:10:58.303 }, 00:10:58.303 "peer_address": { 00:10:58.303 "trtype": "TCP", 00:10:58.303 "adrfam": "IPv4", 00:10:58.303 "traddr": "10.0.0.1", 00:10:58.303 "trsvcid": "40106" 00:10:58.303 }, 00:10:58.303 "auth": { 00:10:58.303 "state": "completed", 00:10:58.303 "digest": "sha384", 00:10:58.303 "dhgroup": "ffdhe2048" 00:10:58.303 } 00:10:58.303 } 00:10:58.303 ]' 00:10:58.303 03:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:58.303 03:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:58.303 03:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:58.303 03:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:58.303 03:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:58.303 03:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:58.303 03:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:58.303 03:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:58.562 03:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:10:58.562 03:13:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:10:59.498 03:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:59.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:59.498 03:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:10:59.498 03:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.498 03:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.498 03:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.498 03:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:59.498 03:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:59.498 03:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:59.498 03:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:10:59.498 03:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:59.498 03:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:59.498 03:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:59.498 03:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:59.498 03:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:59.498 03:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key3 00:10:59.498 03:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.498 03:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.498 03:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.498 03:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:59.498 03:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:59.498 03:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:59.757 00:11:00.016 03:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:00.016 03:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:00.016 03:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:00.276 03:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:00.276 03:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:00.276 03:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.276 03:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.276 03:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.276 03:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:00.276 { 00:11:00.276 "cntlid": 63, 00:11:00.276 "qid": 0, 00:11:00.276 "state": "enabled", 00:11:00.276 "thread": "nvmf_tgt_poll_group_000", 00:11:00.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:11:00.276 "listen_address": { 00:11:00.276 "trtype": "TCP", 00:11:00.276 "adrfam": "IPv4", 00:11:00.276 "traddr": "10.0.0.3", 00:11:00.276 "trsvcid": "4420" 00:11:00.276 }, 00:11:00.276 "peer_address": { 00:11:00.276 "trtype": "TCP", 00:11:00.276 "adrfam": "IPv4", 00:11:00.276 "traddr": "10.0.0.1", 00:11:00.276 "trsvcid": "34514" 00:11:00.276 }, 00:11:00.276 "auth": { 00:11:00.276 "state": "completed", 00:11:00.276 "digest": "sha384", 00:11:00.276 "dhgroup": "ffdhe2048" 00:11:00.276 } 00:11:00.276 } 00:11:00.276 ]' 00:11:00.276 03:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:00.276 03:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:00.276 03:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:00.276 03:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:00.276 03:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:00.276 03:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:00.276 03:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:00.276 03:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:00.570 03:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:11:00.570 03:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:11:01.162 03:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:01.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:01.162 03:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:11:01.162 03:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.162 03:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.162 03:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.162 03:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:01.162 03:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:01.162 03:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:01.162 03:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:01.421 03:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:11:01.421 03:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:01.421 03:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:01.421 03:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:01.421 03:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:01.421 03:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:01.421 03:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.421 03:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.421 03:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.421 03:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.421 03:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.421 03:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:11:01.421 03:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.680 00:11:01.939 03:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:01.939 03:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:01.939 03:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:01.939 03:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:01.939 03:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:01.939 03:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.939 03:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.939 03:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.939 03:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:01.939 { 00:11:01.939 "cntlid": 65, 00:11:01.939 "qid": 0, 00:11:01.939 "state": "enabled", 00:11:01.939 "thread": "nvmf_tgt_poll_group_000", 00:11:01.939 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:11:01.939 "listen_address": { 00:11:01.939 "trtype": "TCP", 00:11:01.939 "adrfam": "IPv4", 00:11:01.939 "traddr": "10.0.0.3", 00:11:01.939 "trsvcid": "4420" 00:11:01.939 }, 00:11:01.939 "peer_address": { 00:11:01.939 "trtype": "TCP", 00:11:01.939 "adrfam": "IPv4", 00:11:01.939 "traddr": "10.0.0.1", 00:11:01.939 "trsvcid": "34532" 00:11:01.939 }, 00:11:01.939 "auth": { 00:11:01.939 "state": "completed", 00:11:01.939 "digest": "sha384", 00:11:01.939 "dhgroup": "ffdhe3072" 00:11:01.939 } 00:11:01.939 } 00:11:01.939 ]' 00:11:02.198 03:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:02.198 03:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:02.198 03:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:02.198 03:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:02.198 03:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:02.198 03:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:02.198 03:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:02.198 03:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:02.457 03:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:11:02.457 03:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:11:03.105 03:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:03.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:03.105 03:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:11:03.105 03:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.105 03:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.105 03:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.105 03:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:03.105 03:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:03.105 03:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:03.365 03:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:11:03.365 03:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:03.365 03:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:03.365 03:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:03.365 03:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:03.365 03:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:03.365 03:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.365 03:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.365 03:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.365 03:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.365 03:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.365 03:13:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.365 03:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.624 00:11:03.624 03:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:03.624 03:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:03.624 03:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:04.192 03:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:04.192 03:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:04.192 03:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.192 03:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.192 03:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.192 03:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:04.192 { 00:11:04.192 "cntlid": 67, 00:11:04.192 "qid": 0, 00:11:04.192 "state": "enabled", 00:11:04.192 "thread": "nvmf_tgt_poll_group_000", 00:11:04.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:11:04.192 "listen_address": { 00:11:04.192 "trtype": "TCP", 00:11:04.192 "adrfam": "IPv4", 00:11:04.192 "traddr": "10.0.0.3", 00:11:04.192 "trsvcid": "4420" 00:11:04.192 }, 00:11:04.192 "peer_address": { 00:11:04.192 "trtype": "TCP", 00:11:04.192 "adrfam": "IPv4", 00:11:04.192 "traddr": "10.0.0.1", 00:11:04.192 "trsvcid": "34566" 00:11:04.192 }, 00:11:04.192 "auth": { 00:11:04.192 "state": "completed", 00:11:04.192 "digest": "sha384", 00:11:04.192 "dhgroup": "ffdhe3072" 00:11:04.192 } 00:11:04.192 } 00:11:04.192 ]' 00:11:04.192 03:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:04.192 03:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:04.192 03:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:04.193 03:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:04.193 03:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:04.193 03:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:04.193 03:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:04.193 03:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:04.452 03:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:11:04.452 03:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:11:05.025 03:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:05.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:05.025 03:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:11:05.025 03:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.025 03:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.025 03:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.025 03:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:05.025 03:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:05.025 03:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:05.285 03:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:11:05.285 03:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:05.285 03:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:05.285 03:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:05.285 03:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:05.285 03:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:05.285 03:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:05.285 03:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.285 03:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.285 03:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.285 03:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:05.285 03:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:05.285 03:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:05.543 00:11:05.543 03:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:05.543 03:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:05.543 03:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:05.802 03:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:05.802 03:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:05.802 03:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.802 03:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.802 03:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.802 03:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:05.802 { 00:11:05.802 "cntlid": 69, 00:11:05.802 "qid": 0, 00:11:05.802 "state": "enabled", 00:11:05.802 "thread": "nvmf_tgt_poll_group_000", 00:11:05.802 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:11:05.802 "listen_address": { 00:11:05.802 "trtype": "TCP", 00:11:05.802 "adrfam": "IPv4", 00:11:05.802 "traddr": "10.0.0.3", 00:11:05.802 "trsvcid": "4420" 00:11:05.802 }, 00:11:05.802 "peer_address": { 00:11:05.802 "trtype": "TCP", 00:11:05.802 "adrfam": "IPv4", 00:11:05.802 "traddr": "10.0.0.1", 00:11:05.802 "trsvcid": "34590" 00:11:05.802 }, 00:11:05.802 "auth": { 00:11:05.802 "state": "completed", 00:11:05.802 "digest": "sha384", 00:11:05.802 "dhgroup": "ffdhe3072" 00:11:05.802 } 00:11:05.802 } 00:11:05.802 ]' 00:11:05.802 03:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:06.061 03:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:06.061 03:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:06.061 03:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:06.061 03:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:06.061 03:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:06.061 03:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:11:06.061 03:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:06.320 03:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:11:06.320 03:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:11:06.888 03:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:06.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:06.888 03:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:11:06.888 03:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.888 03:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.888 03:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.888 03:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:06.888 03:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:06.888 03:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:07.147 03:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:11:07.147 03:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:07.147 03:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:07.147 03:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:07.147 03:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:07.147 03:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:07.147 03:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key3 00:11:07.147 03:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.147 03:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.147 03:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.147 03:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:07.147 03:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:07.147 03:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:07.411 00:11:07.673 03:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:07.673 03:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:07.673 03:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:07.933 03:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:07.933 03:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:07.933 03:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.933 03:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.933 03:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.933 03:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:07.933 { 00:11:07.933 "cntlid": 71, 00:11:07.933 "qid": 0, 00:11:07.933 "state": "enabled", 00:11:07.933 "thread": "nvmf_tgt_poll_group_000", 00:11:07.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:11:07.933 "listen_address": { 00:11:07.933 "trtype": "TCP", 00:11:07.933 "adrfam": "IPv4", 00:11:07.933 "traddr": "10.0.0.3", 00:11:07.933 "trsvcid": "4420" 00:11:07.933 }, 00:11:07.933 "peer_address": { 00:11:07.933 "trtype": "TCP", 00:11:07.933 "adrfam": "IPv4", 00:11:07.933 "traddr": "10.0.0.1", 00:11:07.933 "trsvcid": "34608" 00:11:07.933 }, 00:11:07.933 "auth": { 00:11:07.933 "state": "completed", 00:11:07.933 "digest": "sha384", 00:11:07.933 "dhgroup": "ffdhe3072" 00:11:07.933 } 00:11:07.933 } 00:11:07.933 ]' 00:11:07.933 03:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:07.933 03:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:07.933 03:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:07.933 03:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:07.933 03:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:07.933 03:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:07.933 03:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.933 03:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:08.192 03:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:11:08.192 03:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:11:09.130 03:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:09.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:09.130 03:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:11:09.130 03:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.130 03:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.130 03:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.130 03:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:09.130 03:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:09.130 03:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:09.130 03:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:09.390 03:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:11:09.390 03:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:09.390 03:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:09.390 03:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:09.390 03:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:09.390 03:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:09.390 03:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:09.390 03:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.390 03:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.390 03:13:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.390 03:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:09.390 03:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:09.390 03:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:09.660 00:11:09.660 03:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:09.660 03:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:09.660 03:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:09.931 03:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:09.931 03:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:09.931 03:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.931 03:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.931 03:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.931 03:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:09.931 { 00:11:09.931 "cntlid": 73, 00:11:09.931 "qid": 0, 00:11:09.931 "state": "enabled", 00:11:09.931 "thread": "nvmf_tgt_poll_group_000", 00:11:09.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:11:09.931 "listen_address": { 00:11:09.931 "trtype": "TCP", 00:11:09.931 "adrfam": "IPv4", 00:11:09.931 "traddr": "10.0.0.3", 00:11:09.931 "trsvcid": "4420" 00:11:09.931 }, 00:11:09.931 "peer_address": { 00:11:09.931 "trtype": "TCP", 00:11:09.931 "adrfam": "IPv4", 00:11:09.931 "traddr": "10.0.0.1", 00:11:09.931 "trsvcid": "34642" 00:11:09.931 }, 00:11:09.931 "auth": { 00:11:09.931 "state": "completed", 00:11:09.931 "digest": "sha384", 00:11:09.931 "dhgroup": "ffdhe4096" 00:11:09.931 } 00:11:09.931 } 00:11:09.931 ]' 00:11:09.931 03:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:09.931 03:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:09.931 03:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:10.190 03:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:10.190 03:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:10.190 03:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:10.190 03:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:10.190 03:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:10.449 03:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:11:10.449 03:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:11:11.017 03:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:11.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:11.017 03:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:11:11.017 03:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.017 03:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.017 03:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.017 03:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:11.017 03:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:11.017 03:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:11.276 03:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:11:11.276 03:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:11.276 03:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:11.276 03:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:11.276 03:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:11.276 03:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:11.276 03:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:11.276 03:13:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.276 03:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.276 03:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.276 03:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:11.276 03:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:11.276 03:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:11.844 00:11:11.844 03:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:11.844 03:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:11.844 03:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.103 03:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:12.103 03:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:12.103 03:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.103 03:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.103 03:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.103 03:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:12.103 { 00:11:12.103 "cntlid": 75, 00:11:12.103 "qid": 0, 00:11:12.103 "state": "enabled", 00:11:12.103 "thread": "nvmf_tgt_poll_group_000", 00:11:12.103 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:11:12.103 "listen_address": { 00:11:12.103 "trtype": "TCP", 00:11:12.103 "adrfam": "IPv4", 00:11:12.103 "traddr": "10.0.0.3", 00:11:12.103 "trsvcid": "4420" 00:11:12.103 }, 00:11:12.103 "peer_address": { 00:11:12.103 "trtype": "TCP", 00:11:12.103 "adrfam": "IPv4", 00:11:12.103 "traddr": "10.0.0.1", 00:11:12.103 "trsvcid": "57012" 00:11:12.103 }, 00:11:12.103 "auth": { 00:11:12.103 "state": "completed", 00:11:12.103 "digest": "sha384", 00:11:12.103 "dhgroup": "ffdhe4096" 00:11:12.103 } 00:11:12.103 } 00:11:12.103 ]' 00:11:12.103 03:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:12.103 03:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:12.103 03:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:12.103 03:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:11:12.103 03:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:12.103 03:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:12.103 03:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:12.103 03:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:12.362 03:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:11:12.362 03:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:11:13.298 03:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:13.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:13.298 03:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:11:13.298 03:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.298 03:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.298 03:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.298 03:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:13.298 03:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:13.298 03:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:13.298 03:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:11:13.298 03:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:13.298 03:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:13.298 03:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:13.298 03:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:13.298 03:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:13.298 03:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:13.298 03:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.298 03:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.298 03:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.298 03:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:13.298 03:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:13.298 03:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:13.866 00:11:13.866 03:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:13.866 03:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:13.866 03:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.866 03:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.866 03:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.866 03:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.866 03:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.866 03:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.866 03:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:13.866 { 00:11:13.866 "cntlid": 77, 00:11:13.866 "qid": 0, 00:11:13.866 "state": "enabled", 00:11:13.866 "thread": "nvmf_tgt_poll_group_000", 00:11:13.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:11:13.866 "listen_address": { 00:11:13.866 "trtype": "TCP", 00:11:13.866 "adrfam": "IPv4", 00:11:13.866 "traddr": "10.0.0.3", 00:11:13.866 "trsvcid": "4420" 00:11:13.866 }, 00:11:13.866 "peer_address": { 00:11:13.866 "trtype": "TCP", 00:11:13.866 "adrfam": "IPv4", 00:11:13.866 "traddr": "10.0.0.1", 00:11:13.866 "trsvcid": "57030" 00:11:13.866 }, 00:11:13.866 "auth": { 00:11:13.866 "state": "completed", 00:11:13.866 "digest": "sha384", 00:11:13.866 "dhgroup": "ffdhe4096" 00:11:13.866 } 00:11:13.866 } 00:11:13.866 ]' 00:11:13.866 03:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:14.126 03:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:14.126 03:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:11:14.126 03:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:14.126 03:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:14.126 03:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:14.126 03:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:14.126 03:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:14.385 03:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:11:14.385 03:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:11:15.322 03:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:15.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:15.322 03:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:11:15.322 03:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.322 03:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.322 03:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.322 03:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:15.322 03:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:15.322 03:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:15.322 03:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:11:15.322 03:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:15.322 03:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:15.322 03:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:15.322 03:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:15.322 03:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.322 03:13:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key3 00:11:15.322 03:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.322 03:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.322 03:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.322 03:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:15.322 03:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:15.322 03:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:15.890 00:11:15.890 03:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:15.890 03:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:15.890 03:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:16.149 03:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:16.149 03:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:16.149 03:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.149 03:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.149 03:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.149 03:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:16.149 { 00:11:16.149 "cntlid": 79, 00:11:16.149 "qid": 0, 00:11:16.149 "state": "enabled", 00:11:16.149 "thread": "nvmf_tgt_poll_group_000", 00:11:16.149 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:11:16.149 "listen_address": { 00:11:16.149 "trtype": "TCP", 00:11:16.149 "adrfam": "IPv4", 00:11:16.149 "traddr": "10.0.0.3", 00:11:16.149 "trsvcid": "4420" 00:11:16.149 }, 00:11:16.149 "peer_address": { 00:11:16.149 "trtype": "TCP", 00:11:16.149 "adrfam": "IPv4", 00:11:16.149 "traddr": "10.0.0.1", 00:11:16.149 "trsvcid": "57042" 00:11:16.149 }, 00:11:16.149 "auth": { 00:11:16.149 "state": "completed", 00:11:16.149 "digest": "sha384", 00:11:16.149 "dhgroup": "ffdhe4096" 00:11:16.149 } 00:11:16.149 } 00:11:16.149 ]' 00:11:16.149 03:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:16.149 03:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:16.149 03:13:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:16.149 03:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:16.149 03:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:16.149 03:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:16.149 03:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:16.149 03:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:16.717 03:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:11:16.717 03:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:11:17.285 03:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:17.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:17.285 03:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:11:17.285 03:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.285 03:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.285 03:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.285 03:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:17.285 03:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:17.285 03:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:17.285 03:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:17.545 03:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:11:17.545 03:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:17.545 03:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:17.545 03:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:17.545 03:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:17.545 03:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:17.545 03:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:17.545 03:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.545 03:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.545 03:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.545 03:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:17.545 03:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:17.545 03:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:18.127 00:11:18.127 03:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:18.127 03:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:18.127 03:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:18.413 03:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:18.413 03:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:18.413 03:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.413 03:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.413 03:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.413 03:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:18.413 { 00:11:18.413 "cntlid": 81, 00:11:18.413 "qid": 0, 00:11:18.413 "state": "enabled", 00:11:18.413 "thread": "nvmf_tgt_poll_group_000", 00:11:18.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:11:18.413 "listen_address": { 00:11:18.413 "trtype": "TCP", 00:11:18.413 "adrfam": "IPv4", 00:11:18.413 "traddr": "10.0.0.3", 00:11:18.413 "trsvcid": "4420" 00:11:18.413 }, 00:11:18.413 "peer_address": { 00:11:18.413 "trtype": "TCP", 00:11:18.413 "adrfam": "IPv4", 00:11:18.413 "traddr": "10.0.0.1", 00:11:18.413 "trsvcid": "57066" 00:11:18.413 }, 00:11:18.413 "auth": { 00:11:18.413 "state": "completed", 00:11:18.413 "digest": "sha384", 00:11:18.413 "dhgroup": "ffdhe6144" 00:11:18.413 } 00:11:18.413 } 00:11:18.413 ]' 00:11:18.413 03:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
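The trace above and below repeats the same connect_authenticate pass for each --dhchap-dhgroups value (ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192) and key index. Condensed into a single pass, the harness issues roughly the following commands; this is a sketch assembled only from the commands visible in this log (the host NQN, addresses, and DHHC-1 secrets are the ones printed in the trace and are elided here, and the target-side rpc.py calls are assumed to use the default RPC socket, which the trace does not show):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostrpc="$rpc -s /var/tmp/host.sock"    # host-side SPDK instance, as shown in the trace
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918

  # restrict the host to one digest/DH-group combination for this pass
  $hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
  # register the host and its DH-HMAC-CHAP key(s) on the target subsystem
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # attach from the host side, which performs the authentication handshake
  $hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # verify the controller exists and that the target reports the expected auth result
  $hostrpc bdev_nvme_get_controllers | jq -r '.[].name'                                   # expect nvme0
  $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'     # expect "completed"
  $hostrpc bdev_nvme_detach_controller nvme0
  # repeat the handshake with the kernel nvme host using the raw secrets, then clean up
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -q "$hostnqn" -l 0 \
      --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:...
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  $rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
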
00:11:18.413 03:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:18.413 03:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:18.413 03:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:18.413 03:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:18.413 03:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:18.414 03:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:18.414 03:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:18.981 03:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:11:18.981 03:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:11:19.550 03:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:19.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:19.550 03:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:11:19.550 03:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.550 03:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.550 03:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.550 03:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:19.550 03:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:19.550 03:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:19.809 03:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:11:19.809 03:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:19.809 03:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:19.809 03:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:11:19.809 03:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:19.809 03:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:19.809 03:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:19.809 03:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.809 03:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.809 03:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.809 03:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:19.809 03:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:19.809 03:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.377 00:11:20.377 03:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:20.377 03:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:20.377 03:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:20.636 03:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:20.636 03:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:20.636 03:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.636 03:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.636 03:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.636 03:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:20.636 { 00:11:20.636 "cntlid": 83, 00:11:20.636 "qid": 0, 00:11:20.636 "state": "enabled", 00:11:20.636 "thread": "nvmf_tgt_poll_group_000", 00:11:20.636 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:11:20.636 "listen_address": { 00:11:20.636 "trtype": "TCP", 00:11:20.636 "adrfam": "IPv4", 00:11:20.636 "traddr": "10.0.0.3", 00:11:20.636 "trsvcid": "4420" 00:11:20.636 }, 00:11:20.636 "peer_address": { 00:11:20.636 "trtype": "TCP", 00:11:20.636 "adrfam": "IPv4", 00:11:20.636 "traddr": "10.0.0.1", 00:11:20.636 "trsvcid": "36036" 00:11:20.636 }, 00:11:20.636 "auth": { 00:11:20.636 "state": "completed", 00:11:20.636 "digest": "sha384", 
00:11:20.636 "dhgroup": "ffdhe6144" 00:11:20.636 } 00:11:20.636 } 00:11:20.636 ]' 00:11:20.636 03:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:20.636 03:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:20.636 03:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:20.636 03:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:20.636 03:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:20.636 03:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:20.636 03:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:20.636 03:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:21.204 03:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:11:21.204 03:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:11:21.771 03:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:21.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:21.771 03:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:11:21.771 03:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.771 03:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.771 03:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.771 03:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:21.771 03:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:21.771 03:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:22.030 03:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:11:22.030 03:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:22.030 03:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:11:22.030 03:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:22.031 03:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:22.031 03:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.031 03:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.031 03:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.031 03:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.031 03:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.031 03:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.031 03:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.031 03:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.598 00:11:22.598 03:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:22.598 03:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:22.598 03:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:22.857 03:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:22.857 03:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:22.857 03:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.857 03:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.857 03:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.857 03:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:22.857 { 00:11:22.857 "cntlid": 85, 00:11:22.857 "qid": 0, 00:11:22.857 "state": "enabled", 00:11:22.857 "thread": "nvmf_tgt_poll_group_000", 00:11:22.857 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:11:22.857 "listen_address": { 00:11:22.857 "trtype": "TCP", 00:11:22.857 "adrfam": "IPv4", 00:11:22.857 "traddr": "10.0.0.3", 00:11:22.857 "trsvcid": "4420" 00:11:22.857 }, 00:11:22.857 "peer_address": { 00:11:22.857 "trtype": "TCP", 00:11:22.857 "adrfam": "IPv4", 00:11:22.857 "traddr": "10.0.0.1", 00:11:22.857 "trsvcid": "36070" 
00:11:22.857 }, 00:11:22.857 "auth": { 00:11:22.857 "state": "completed", 00:11:22.857 "digest": "sha384", 00:11:22.857 "dhgroup": "ffdhe6144" 00:11:22.857 } 00:11:22.857 } 00:11:22.857 ]' 00:11:22.857 03:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:22.857 03:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:22.857 03:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:23.116 03:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:23.116 03:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:23.116 03:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.116 03:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.116 03:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:23.375 03:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:11:23.375 03:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:11:23.943 03:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:23.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:23.943 03:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:11:23.943 03:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.943 03:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.943 03:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.943 03:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:23.943 03:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:23.943 03:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:24.511 03:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:11:24.511 03:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:11:24.511 03:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:24.511 03:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:24.511 03:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:24.511 03:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:24.511 03:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key3 00:11:24.511 03:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.511 03:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.511 03:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.511 03:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:24.511 03:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:24.511 03:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:24.770 00:11:24.770 03:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:24.770 03:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:24.770 03:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.030 03:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:25.030 03:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.030 03:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.030 03:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.030 03:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.030 03:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:25.030 { 00:11:25.030 "cntlid": 87, 00:11:25.030 "qid": 0, 00:11:25.030 "state": "enabled", 00:11:25.030 "thread": "nvmf_tgt_poll_group_000", 00:11:25.030 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:11:25.030 "listen_address": { 00:11:25.030 "trtype": "TCP", 00:11:25.030 "adrfam": "IPv4", 00:11:25.030 "traddr": "10.0.0.3", 00:11:25.030 "trsvcid": "4420" 00:11:25.030 }, 00:11:25.030 "peer_address": { 00:11:25.030 "trtype": "TCP", 00:11:25.030 "adrfam": "IPv4", 00:11:25.030 "traddr": "10.0.0.1", 00:11:25.030 "trsvcid": 
"36096" 00:11:25.030 }, 00:11:25.030 "auth": { 00:11:25.030 "state": "completed", 00:11:25.030 "digest": "sha384", 00:11:25.030 "dhgroup": "ffdhe6144" 00:11:25.030 } 00:11:25.030 } 00:11:25.030 ]' 00:11:25.030 03:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:25.030 03:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:25.030 03:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:25.290 03:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:25.290 03:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:25.290 03:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:25.290 03:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:25.290 03:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:25.549 03:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:11:25.549 03:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:11:26.117 03:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.117 03:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:11:26.117 03:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.117 03:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.117 03:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.117 03:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:26.117 03:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:26.117 03:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:26.117 03:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:26.379 03:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:11:26.379 03:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:11:26.379 03:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:26.379 03:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:26.379 03:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:26.379 03:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:26.379 03:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.379 03:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.379 03:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.379 03:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.379 03:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.379 03:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.379 03:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.948 00:11:26.948 03:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:26.948 03:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:26.948 03:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.516 03:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.516 03:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.516 03:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.516 03:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.516 03:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.516 03:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:27.516 { 00:11:27.516 "cntlid": 89, 00:11:27.516 "qid": 0, 00:11:27.516 "state": "enabled", 00:11:27.516 "thread": "nvmf_tgt_poll_group_000", 00:11:27.516 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:11:27.516 "listen_address": { 00:11:27.516 "trtype": "TCP", 00:11:27.516 "adrfam": "IPv4", 00:11:27.516 "traddr": "10.0.0.3", 00:11:27.516 "trsvcid": "4420" 00:11:27.516 }, 00:11:27.516 "peer_address": { 00:11:27.516 
"trtype": "TCP", 00:11:27.516 "adrfam": "IPv4", 00:11:27.516 "traddr": "10.0.0.1", 00:11:27.516 "trsvcid": "36118" 00:11:27.516 }, 00:11:27.516 "auth": { 00:11:27.516 "state": "completed", 00:11:27.516 "digest": "sha384", 00:11:27.516 "dhgroup": "ffdhe8192" 00:11:27.516 } 00:11:27.516 } 00:11:27.516 ]' 00:11:27.516 03:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:27.516 03:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:27.516 03:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:27.516 03:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:27.516 03:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:27.516 03:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.516 03:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.516 03:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:27.775 03:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:11:27.775 03:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:11:28.712 03:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:28.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.712 03:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:11:28.712 03:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.712 03:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.712 03:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.712 03:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:28.712 03:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:28.712 03:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:28.712 03:14:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:11:28.712 03:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:28.712 03:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:28.712 03:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:28.712 03:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:28.712 03:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:28.712 03:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.712 03:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.712 03:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.712 03:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.712 03:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.712 03:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.712 03:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.280 00:11:29.280 03:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:29.280 03:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.280 03:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:29.539 03:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.539 03:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.539 03:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.539 03:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.798 03:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.798 03:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:29.798 { 00:11:29.798 "cntlid": 91, 00:11:29.798 "qid": 0, 00:11:29.798 "state": "enabled", 00:11:29.798 "thread": "nvmf_tgt_poll_group_000", 00:11:29.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 
00:11:29.798 "listen_address": { 00:11:29.798 "trtype": "TCP", 00:11:29.798 "adrfam": "IPv4", 00:11:29.798 "traddr": "10.0.0.3", 00:11:29.798 "trsvcid": "4420" 00:11:29.798 }, 00:11:29.798 "peer_address": { 00:11:29.798 "trtype": "TCP", 00:11:29.798 "adrfam": "IPv4", 00:11:29.798 "traddr": "10.0.0.1", 00:11:29.798 "trsvcid": "36146" 00:11:29.798 }, 00:11:29.798 "auth": { 00:11:29.798 "state": "completed", 00:11:29.798 "digest": "sha384", 00:11:29.798 "dhgroup": "ffdhe8192" 00:11:29.798 } 00:11:29.798 } 00:11:29.798 ]' 00:11:29.798 03:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:29.798 03:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:29.798 03:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:29.798 03:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:29.798 03:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:29.798 03:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.798 03:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.798 03:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.058 03:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:11:30.058 03:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:11:30.995 03:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.995 03:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:11:30.995 03:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.995 03:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.995 03:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.995 03:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:30.995 03:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:30.995 03:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:30.995 03:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:11:30.995 03:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:30.995 03:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:30.995 03:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:30.995 03:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:30.995 03:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.995 03:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.995 03:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.995 03:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.995 03:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.995 03:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.995 03:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.995 03:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:31.564 00:11:31.564 03:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:31.564 03:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:31.564 03:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.824 03:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.824 03:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.824 03:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.824 03:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.824 03:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.824 03:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:31.824 { 00:11:31.824 "cntlid": 93, 00:11:31.824 "qid": 0, 00:11:31.824 "state": "enabled", 00:11:31.824 "thread": 
"nvmf_tgt_poll_group_000", 00:11:31.824 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:11:31.824 "listen_address": { 00:11:31.824 "trtype": "TCP", 00:11:31.824 "adrfam": "IPv4", 00:11:31.824 "traddr": "10.0.0.3", 00:11:31.824 "trsvcid": "4420" 00:11:31.824 }, 00:11:31.824 "peer_address": { 00:11:31.824 "trtype": "TCP", 00:11:31.824 "adrfam": "IPv4", 00:11:31.824 "traddr": "10.0.0.1", 00:11:31.824 "trsvcid": "34968" 00:11:31.824 }, 00:11:31.824 "auth": { 00:11:31.824 "state": "completed", 00:11:31.824 "digest": "sha384", 00:11:31.824 "dhgroup": "ffdhe8192" 00:11:31.824 } 00:11:31.824 } 00:11:31.824 ]' 00:11:31.824 03:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:32.083 03:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:32.083 03:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:32.083 03:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:32.083 03:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:32.083 03:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:32.083 03:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:32.083 03:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.343 03:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:11:32.343 03:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:11:32.911 03:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.911 03:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:11:32.911 03:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.911 03:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.911 03:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.911 03:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:32.911 03:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:32.911 03:14:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:33.170 03:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:11:33.170 03:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:33.429 03:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:33.429 03:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:33.429 03:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:33.429 03:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.429 03:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key3 00:11:33.429 03:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.429 03:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.429 03:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.429 03:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:33.429 03:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:33.429 03:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:33.997 00:11:33.997 03:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:33.997 03:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.997 03:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:34.256 03:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:34.256 03:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:34.256 03:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.256 03:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.256 03:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.256 03:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:34.256 { 00:11:34.256 "cntlid": 95, 00:11:34.256 "qid": 0, 00:11:34.256 "state": "enabled", 00:11:34.256 
"thread": "nvmf_tgt_poll_group_000", 00:11:34.256 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:11:34.256 "listen_address": { 00:11:34.256 "trtype": "TCP", 00:11:34.256 "adrfam": "IPv4", 00:11:34.256 "traddr": "10.0.0.3", 00:11:34.256 "trsvcid": "4420" 00:11:34.256 }, 00:11:34.256 "peer_address": { 00:11:34.256 "trtype": "TCP", 00:11:34.256 "adrfam": "IPv4", 00:11:34.256 "traddr": "10.0.0.1", 00:11:34.256 "trsvcid": "34982" 00:11:34.256 }, 00:11:34.256 "auth": { 00:11:34.256 "state": "completed", 00:11:34.256 "digest": "sha384", 00:11:34.256 "dhgroup": "ffdhe8192" 00:11:34.256 } 00:11:34.256 } 00:11:34.256 ]' 00:11:34.256 03:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:34.256 03:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:34.256 03:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:34.256 03:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:34.256 03:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:34.256 03:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:34.256 03:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:34.256 03:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.831 03:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:11:34.831 03:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:11:35.421 03:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.421 03:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:11:35.421 03:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.421 03:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.421 03:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.421 03:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:35.421 03:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:35.421 03:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:35.421 03:14:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:35.421 03:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:35.680 03:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:11:35.680 03:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:35.680 03:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:35.680 03:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:35.680 03:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:35.680 03:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.680 03:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.680 03:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.680 03:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.680 03:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.680 03:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.680 03:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.680 03:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.939 00:11:35.939 03:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:35.939 03:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.939 03:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:36.507 03:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.507 03:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.507 03:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.507 03:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.507 03:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.507 03:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:36.507 { 00:11:36.507 "cntlid": 97, 00:11:36.507 "qid": 0, 00:11:36.507 "state": "enabled", 00:11:36.507 "thread": "nvmf_tgt_poll_group_000", 00:11:36.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:11:36.507 "listen_address": { 00:11:36.507 "trtype": "TCP", 00:11:36.507 "adrfam": "IPv4", 00:11:36.507 "traddr": "10.0.0.3", 00:11:36.507 "trsvcid": "4420" 00:11:36.507 }, 00:11:36.507 "peer_address": { 00:11:36.507 "trtype": "TCP", 00:11:36.507 "adrfam": "IPv4", 00:11:36.507 "traddr": "10.0.0.1", 00:11:36.507 "trsvcid": "35012" 00:11:36.507 }, 00:11:36.507 "auth": { 00:11:36.507 "state": "completed", 00:11:36.507 "digest": "sha512", 00:11:36.507 "dhgroup": "null" 00:11:36.507 } 00:11:36.507 } 00:11:36.507 ]' 00:11:36.507 03:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:36.507 03:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:36.507 03:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:36.507 03:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:36.507 03:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:36.507 03:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:36.507 03:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:36.507 03:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:36.766 03:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:11:36.766 03:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:11:37.702 03:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.702 03:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:11:37.702 03:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.702 03:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.702 03:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:37.702 03:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:37.702 03:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:37.703 03:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:37.703 03:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:11:37.703 03:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:37.703 03:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:37.703 03:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:37.703 03:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:37.703 03:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.703 03:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.703 03:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.703 03:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.703 03:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.703 03:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.703 03:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.703 03:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.961 00:11:38.220 03:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:38.220 03:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.220 03:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:38.479 03:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.479 03:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.479 03:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.479 03:14:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.479 03:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.479 03:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:38.479 { 00:11:38.479 "cntlid": 99, 00:11:38.479 "qid": 0, 00:11:38.479 "state": "enabled", 00:11:38.479 "thread": "nvmf_tgt_poll_group_000", 00:11:38.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:11:38.479 "listen_address": { 00:11:38.479 "trtype": "TCP", 00:11:38.479 "adrfam": "IPv4", 00:11:38.479 "traddr": "10.0.0.3", 00:11:38.479 "trsvcid": "4420" 00:11:38.479 }, 00:11:38.479 "peer_address": { 00:11:38.479 "trtype": "TCP", 00:11:38.479 "adrfam": "IPv4", 00:11:38.479 "traddr": "10.0.0.1", 00:11:38.479 "trsvcid": "35038" 00:11:38.479 }, 00:11:38.479 "auth": { 00:11:38.479 "state": "completed", 00:11:38.479 "digest": "sha512", 00:11:38.479 "dhgroup": "null" 00:11:38.479 } 00:11:38.479 } 00:11:38.479 ]' 00:11:38.479 03:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:38.479 03:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:38.479 03:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:38.480 03:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:38.480 03:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:38.480 03:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.480 03:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.480 03:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.738 03:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:11:38.738 03:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:11:39.306 03:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.306 03:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:11:39.306 03:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.306 03:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.565 03:14:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.565 03:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:39.565 03:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:39.565 03:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:39.565 03:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:11:39.565 03:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:39.565 03:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:39.565 03:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:39.565 03:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:39.565 03:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.565 03:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.565 03:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.565 03:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.565 03:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.565 03:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.565 03:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.565 03:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:40.133 00:11:40.133 03:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:40.133 03:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.133 03:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:40.392 03:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.392 03:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.392 03:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.392 03:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.392 03:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.392 03:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:40.392 { 00:11:40.392 "cntlid": 101, 00:11:40.392 "qid": 0, 00:11:40.392 "state": "enabled", 00:11:40.392 "thread": "nvmf_tgt_poll_group_000", 00:11:40.392 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:11:40.392 "listen_address": { 00:11:40.392 "trtype": "TCP", 00:11:40.392 "adrfam": "IPv4", 00:11:40.392 "traddr": "10.0.0.3", 00:11:40.392 "trsvcid": "4420" 00:11:40.392 }, 00:11:40.392 "peer_address": { 00:11:40.392 "trtype": "TCP", 00:11:40.392 "adrfam": "IPv4", 00:11:40.392 "traddr": "10.0.0.1", 00:11:40.392 "trsvcid": "53368" 00:11:40.392 }, 00:11:40.392 "auth": { 00:11:40.392 "state": "completed", 00:11:40.392 "digest": "sha512", 00:11:40.392 "dhgroup": "null" 00:11:40.392 } 00:11:40.392 } 00:11:40.392 ]' 00:11:40.392 03:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:40.392 03:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:40.392 03:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:40.392 03:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:40.392 03:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:40.392 03:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.392 03:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.392 03:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:40.651 03:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:11:40.651 03:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:11:41.587 03:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.587 03:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:11:41.587 03:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.587 03:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:11:41.587 03:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.587 03:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:41.587 03:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:41.588 03:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:41.588 03:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:11:41.588 03:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:41.588 03:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:41.588 03:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:41.588 03:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:41.588 03:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:41.588 03:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key3 00:11:41.588 03:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.588 03:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.588 03:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.588 03:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:41.588 03:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:41.588 03:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:41.847 00:11:41.847 03:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:41.847 03:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:41.847 03:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:42.106 03:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.106 03:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.106 03:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:42.106 03:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.106 03:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.106 03:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:42.106 { 00:11:42.106 "cntlid": 103, 00:11:42.106 "qid": 0, 00:11:42.106 "state": "enabled", 00:11:42.106 "thread": "nvmf_tgt_poll_group_000", 00:11:42.106 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:11:42.106 "listen_address": { 00:11:42.106 "trtype": "TCP", 00:11:42.106 "adrfam": "IPv4", 00:11:42.106 "traddr": "10.0.0.3", 00:11:42.106 "trsvcid": "4420" 00:11:42.106 }, 00:11:42.106 "peer_address": { 00:11:42.106 "trtype": "TCP", 00:11:42.106 "adrfam": "IPv4", 00:11:42.106 "traddr": "10.0.0.1", 00:11:42.106 "trsvcid": "53390" 00:11:42.106 }, 00:11:42.106 "auth": { 00:11:42.106 "state": "completed", 00:11:42.106 "digest": "sha512", 00:11:42.106 "dhgroup": "null" 00:11:42.106 } 00:11:42.106 } 00:11:42.106 ]' 00:11:42.365 03:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:42.365 03:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:42.365 03:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:42.365 03:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:42.365 03:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:42.365 03:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.365 03:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.365 03:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.623 03:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:11:42.623 03:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:11:43.190 03:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.190 03:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:11:43.190 03:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.190 03:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.190 03:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:11:43.190 03:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:43.190 03:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:43.190 03:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:43.190 03:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:43.757 03:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:11:43.757 03:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:43.758 03:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:43.758 03:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:43.758 03:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:43.758 03:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.758 03:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.758 03:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.758 03:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.758 03:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.758 03:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.758 03:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.758 03:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.017 00:11:44.017 03:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:44.017 03:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:44.017 03:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.276 03:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.276 03:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.276 
03:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.276 03:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.276 03:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.276 03:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:44.276 { 00:11:44.276 "cntlid": 105, 00:11:44.276 "qid": 0, 00:11:44.276 "state": "enabled", 00:11:44.276 "thread": "nvmf_tgt_poll_group_000", 00:11:44.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:11:44.276 "listen_address": { 00:11:44.276 "trtype": "TCP", 00:11:44.276 "adrfam": "IPv4", 00:11:44.276 "traddr": "10.0.0.3", 00:11:44.276 "trsvcid": "4420" 00:11:44.276 }, 00:11:44.276 "peer_address": { 00:11:44.276 "trtype": "TCP", 00:11:44.276 "adrfam": "IPv4", 00:11:44.276 "traddr": "10.0.0.1", 00:11:44.276 "trsvcid": "53416" 00:11:44.276 }, 00:11:44.276 "auth": { 00:11:44.276 "state": "completed", 00:11:44.276 "digest": "sha512", 00:11:44.276 "dhgroup": "ffdhe2048" 00:11:44.276 } 00:11:44.276 } 00:11:44.276 ]' 00:11:44.276 03:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:44.276 03:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:44.276 03:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:44.276 03:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:44.276 03:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:44.276 03:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.276 03:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.276 03:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.535 03:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:11:44.535 03:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:11:45.103 03:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.103 03:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:11:45.103 03:14:28 
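The assertion made after each attach (the qpair dump above) is a jq check over nvmf_subsystem_get_qpairs output; roughly, assuming a single qpair as in this run:

  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  # the target reports the negotiated auth parameters per queue pair
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]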
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.103 03:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.103 03:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.103 03:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:45.103 03:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:45.103 03:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:45.361 03:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:11:45.361 03:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:45.361 03:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:45.361 03:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:45.361 03:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:45.361 03:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.361 03:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.361 03:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.361 03:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.361 03:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.361 03:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.361 03:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.362 03:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.929 00:11:45.929 03:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:45.929 03:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:45.929 03:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.188 03:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:11:46.188 03:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.188 03:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.188 03:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.188 03:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.188 03:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:46.188 { 00:11:46.188 "cntlid": 107, 00:11:46.188 "qid": 0, 00:11:46.188 "state": "enabled", 00:11:46.188 "thread": "nvmf_tgt_poll_group_000", 00:11:46.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:11:46.188 "listen_address": { 00:11:46.188 "trtype": "TCP", 00:11:46.188 "adrfam": "IPv4", 00:11:46.188 "traddr": "10.0.0.3", 00:11:46.188 "trsvcid": "4420" 00:11:46.188 }, 00:11:46.188 "peer_address": { 00:11:46.188 "trtype": "TCP", 00:11:46.188 "adrfam": "IPv4", 00:11:46.188 "traddr": "10.0.0.1", 00:11:46.188 "trsvcid": "53448" 00:11:46.188 }, 00:11:46.188 "auth": { 00:11:46.188 "state": "completed", 00:11:46.188 "digest": "sha512", 00:11:46.188 "dhgroup": "ffdhe2048" 00:11:46.188 } 00:11:46.188 } 00:11:46.188 ]' 00:11:46.188 03:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:46.188 03:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:46.188 03:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:46.188 03:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:46.188 03:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:46.188 03:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:46.188 03:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:46.188 03:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.756 03:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:11:46.756 03:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:11:47.366 03:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.366 03:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:11:47.366 03:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.366 03:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.366 03:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.366 03:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:47.366 03:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:47.366 03:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:47.624 03:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:11:47.624 03:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:47.624 03:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:47.624 03:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:47.624 03:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:47.624 03:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.624 03:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.624 03:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.624 03:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.624 03:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.624 03:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.624 03:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.624 03:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.882 00:11:47.882 03:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:47.882 03:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:47.882 03:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:11:48.141 03:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:48.141 03:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:48.141 03:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.399 03:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.399 03:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.399 03:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:48.399 { 00:11:48.399 "cntlid": 109, 00:11:48.399 "qid": 0, 00:11:48.399 "state": "enabled", 00:11:48.399 "thread": "nvmf_tgt_poll_group_000", 00:11:48.399 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:11:48.399 "listen_address": { 00:11:48.399 "trtype": "TCP", 00:11:48.399 "adrfam": "IPv4", 00:11:48.399 "traddr": "10.0.0.3", 00:11:48.399 "trsvcid": "4420" 00:11:48.399 }, 00:11:48.399 "peer_address": { 00:11:48.399 "trtype": "TCP", 00:11:48.399 "adrfam": "IPv4", 00:11:48.399 "traddr": "10.0.0.1", 00:11:48.399 "trsvcid": "53474" 00:11:48.399 }, 00:11:48.399 "auth": { 00:11:48.399 "state": "completed", 00:11:48.399 "digest": "sha512", 00:11:48.399 "dhgroup": "ffdhe2048" 00:11:48.399 } 00:11:48.399 } 00:11:48.399 ]' 00:11:48.399 03:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:48.399 03:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:48.399 03:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:48.399 03:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:48.399 03:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:48.399 03:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.399 03:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:48.400 03:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.658 03:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:11:48.658 03:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:11:49.595 03:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.595 03:14:32 
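The second leg of each iteration drives the kernel initiator with the raw DHHC-1 secrets rather than SPDK keyring names, then tears the host entry down again so the next key/dhgroup combination starts clean; roughly, with this run's generated secrets abbreviated:

  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 \
      --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 \
      --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # remove the host entry before the next pass of the loop
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918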
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:11:49.595 03:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.595 03:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.595 03:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.595 03:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:49.595 03:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:49.595 03:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:49.595 03:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:11:49.595 03:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:49.595 03:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:49.595 03:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:49.595 03:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:49.595 03:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:49.595 03:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key3 00:11:49.595 03:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.595 03:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.595 03:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.595 03:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:49.595 03:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:49.595 03:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:50.164 00:11:50.164 03:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:50.164 03:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:50.164 03:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.423 03:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.423 03:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.423 03:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.423 03:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.423 03:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.423 03:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:50.423 { 00:11:50.423 "cntlid": 111, 00:11:50.423 "qid": 0, 00:11:50.423 "state": "enabled", 00:11:50.423 "thread": "nvmf_tgt_poll_group_000", 00:11:50.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:11:50.423 "listen_address": { 00:11:50.423 "trtype": "TCP", 00:11:50.423 "adrfam": "IPv4", 00:11:50.423 "traddr": "10.0.0.3", 00:11:50.423 "trsvcid": "4420" 00:11:50.423 }, 00:11:50.423 "peer_address": { 00:11:50.423 "trtype": "TCP", 00:11:50.423 "adrfam": "IPv4", 00:11:50.423 "traddr": "10.0.0.1", 00:11:50.423 "trsvcid": "52472" 00:11:50.423 }, 00:11:50.423 "auth": { 00:11:50.423 "state": "completed", 00:11:50.423 "digest": "sha512", 00:11:50.423 "dhgroup": "ffdhe2048" 00:11:50.423 } 00:11:50.423 } 00:11:50.423 ]' 00:11:50.423 03:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:50.423 03:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:50.423 03:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:50.423 03:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:50.423 03:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:50.423 03:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.423 03:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.423 03:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.682 03:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:11:50.682 03:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:11:51.619 03:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.619 03:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:11:51.619 03:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.619 03:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.619 03:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.619 03:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:51.619 03:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:51.619 03:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:51.620 03:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:51.879 03:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:11:51.879 03:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:51.879 03:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:51.879 03:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:51.879 03:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:51.879 03:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:51.879 03:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.879 03:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.879 03:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.879 03:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.879 03:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.879 03:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.879 03:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.139 00:11:52.139 03:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:52.139 03:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.139 03:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:52.398 03:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.398 03:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.398 03:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.398 03:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.398 03:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.398 03:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:52.398 { 00:11:52.398 "cntlid": 113, 00:11:52.398 "qid": 0, 00:11:52.398 "state": "enabled", 00:11:52.398 "thread": "nvmf_tgt_poll_group_000", 00:11:52.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:11:52.398 "listen_address": { 00:11:52.398 "trtype": "TCP", 00:11:52.398 "adrfam": "IPv4", 00:11:52.398 "traddr": "10.0.0.3", 00:11:52.398 "trsvcid": "4420" 00:11:52.398 }, 00:11:52.398 "peer_address": { 00:11:52.398 "trtype": "TCP", 00:11:52.398 "adrfam": "IPv4", 00:11:52.398 "traddr": "10.0.0.1", 00:11:52.398 "trsvcid": "52492" 00:11:52.398 }, 00:11:52.398 "auth": { 00:11:52.398 "state": "completed", 00:11:52.398 "digest": "sha512", 00:11:52.398 "dhgroup": "ffdhe3072" 00:11:52.398 } 00:11:52.398 } 00:11:52.398 ]' 00:11:52.398 03:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:52.657 03:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:52.657 03:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:52.657 03:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:52.657 03:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:52.657 03:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.657 03:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.657 03:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:52.917 03:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:11:52.917 03:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret 
DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:11:53.485 03:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:53.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:53.485 03:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:11:53.485 03:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.485 03:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.485 03:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.485 03:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:53.485 03:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:53.485 03:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:53.744 03:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:11:53.744 03:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:53.744 03:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:53.744 03:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:53.744 03:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:53.744 03:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:53.744 03:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.744 03:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.744 03:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.003 03:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.003 03:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.003 03:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.003 03:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.262 00:11:54.262 03:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:54.262 03:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:54.262 03:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.521 03:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:54.521 03:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:54.521 03:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.521 03:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.521 03:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.521 03:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:54.521 { 00:11:54.521 "cntlid": 115, 00:11:54.521 "qid": 0, 00:11:54.521 "state": "enabled", 00:11:54.521 "thread": "nvmf_tgt_poll_group_000", 00:11:54.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:11:54.521 "listen_address": { 00:11:54.521 "trtype": "TCP", 00:11:54.521 "adrfam": "IPv4", 00:11:54.521 "traddr": "10.0.0.3", 00:11:54.521 "trsvcid": "4420" 00:11:54.521 }, 00:11:54.521 "peer_address": { 00:11:54.521 "trtype": "TCP", 00:11:54.521 "adrfam": "IPv4", 00:11:54.521 "traddr": "10.0.0.1", 00:11:54.521 "trsvcid": "52510" 00:11:54.521 }, 00:11:54.521 "auth": { 00:11:54.521 "state": "completed", 00:11:54.522 "digest": "sha512", 00:11:54.522 "dhgroup": "ffdhe3072" 00:11:54.522 } 00:11:54.522 } 00:11:54.522 ]' 00:11:54.522 03:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:54.522 03:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:54.522 03:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:54.522 03:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:54.522 03:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:54.781 03:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:54.781 03:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:54.781 03:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.040 03:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:11:55.040 03:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid 
cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:11:55.608 03:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:55.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:55.608 03:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:11:55.608 03:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.608 03:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.608 03:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.608 03:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:55.608 03:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:55.608 03:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:55.867 03:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:11:55.867 03:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:55.867 03:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:55.867 03:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:55.867 03:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:55.867 03:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:55.867 03:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.867 03:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.867 03:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.867 03:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.867 03:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.867 03:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.867 03:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.435 00:11:56.435 03:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:56.435 03:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:56.435 03:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:56.694 03:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:56.694 03:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:56.694 03:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.694 03:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.694 03:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.694 03:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:56.694 { 00:11:56.694 "cntlid": 117, 00:11:56.694 "qid": 0, 00:11:56.694 "state": "enabled", 00:11:56.694 "thread": "nvmf_tgt_poll_group_000", 00:11:56.694 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:11:56.695 "listen_address": { 00:11:56.695 "trtype": "TCP", 00:11:56.695 "adrfam": "IPv4", 00:11:56.695 "traddr": "10.0.0.3", 00:11:56.695 "trsvcid": "4420" 00:11:56.695 }, 00:11:56.695 "peer_address": { 00:11:56.695 "trtype": "TCP", 00:11:56.695 "adrfam": "IPv4", 00:11:56.695 "traddr": "10.0.0.1", 00:11:56.695 "trsvcid": "52524" 00:11:56.695 }, 00:11:56.695 "auth": { 00:11:56.695 "state": "completed", 00:11:56.695 "digest": "sha512", 00:11:56.695 "dhgroup": "ffdhe3072" 00:11:56.695 } 00:11:56.695 } 00:11:56.695 ]' 00:11:56.695 03:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:56.695 03:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:56.695 03:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:56.695 03:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:56.695 03:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:56.954 03:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:56.954 03:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:56.954 03:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.213 03:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:11:57.213 03:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:11:58.152 03:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.152 03:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:11:58.152 03:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.152 03:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.152 03:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.152 03:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:58.152 03:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:58.152 03:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:58.152 03:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:11:58.152 03:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:58.152 03:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:58.152 03:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:58.152 03:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:58.152 03:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.152 03:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key3 00:11:58.152 03:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.152 03:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.152 03:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.152 03:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:58.152 03:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:58.152 03:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:58.721 00:11:58.721 03:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:58.721 03:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:58.721 03:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:58.980 03:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:58.980 03:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:58.980 03:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.980 03:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.980 03:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.980 03:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:58.980 { 00:11:58.980 "cntlid": 119, 00:11:58.980 "qid": 0, 00:11:58.980 "state": "enabled", 00:11:58.980 "thread": "nvmf_tgt_poll_group_000", 00:11:58.980 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:11:58.980 "listen_address": { 00:11:58.980 "trtype": "TCP", 00:11:58.980 "adrfam": "IPv4", 00:11:58.980 "traddr": "10.0.0.3", 00:11:58.980 "trsvcid": "4420" 00:11:58.980 }, 00:11:58.980 "peer_address": { 00:11:58.980 "trtype": "TCP", 00:11:58.980 "adrfam": "IPv4", 00:11:58.980 "traddr": "10.0.0.1", 00:11:58.980 "trsvcid": "52548" 00:11:58.980 }, 00:11:58.980 "auth": { 00:11:58.980 "state": "completed", 00:11:58.980 "digest": "sha512", 00:11:58.980 "dhgroup": "ffdhe3072" 00:11:58.980 } 00:11:58.980 } 00:11:58.980 ]' 00:11:58.980 03:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:58.980 03:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:58.980 03:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:59.239 03:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:59.239 03:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:59.239 03:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.239 03:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.239 03:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.501 03:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:11:59.501 03:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:12:00.070 03:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.070 03:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:12:00.070 03:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.070 03:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.070 03:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.070 03:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:00.070 03:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:00.070 03:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:00.070 03:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:00.329 03:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:12:00.329 03:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:00.329 03:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:00.329 03:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:00.329 03:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:00.329 03:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.329 03:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.329 03:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.329 03:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.329 03:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.330 03:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.330 03:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.330 03:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.898 00:12:00.898 03:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:00.898 03:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:00.898 03:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.157 03:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.157 03:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.157 03:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.157 03:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.157 03:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.157 03:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:01.157 { 00:12:01.157 "cntlid": 121, 00:12:01.157 "qid": 0, 00:12:01.157 "state": "enabled", 00:12:01.157 "thread": "nvmf_tgt_poll_group_000", 00:12:01.157 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:12:01.157 "listen_address": { 00:12:01.157 "trtype": "TCP", 00:12:01.157 "adrfam": "IPv4", 00:12:01.157 "traddr": "10.0.0.3", 00:12:01.157 "trsvcid": "4420" 00:12:01.157 }, 00:12:01.157 "peer_address": { 00:12:01.157 "trtype": "TCP", 00:12:01.157 "adrfam": "IPv4", 00:12:01.157 "traddr": "10.0.0.1", 00:12:01.157 "trsvcid": "41758" 00:12:01.157 }, 00:12:01.157 "auth": { 00:12:01.157 "state": "completed", 00:12:01.158 "digest": "sha512", 00:12:01.158 "dhgroup": "ffdhe4096" 00:12:01.158 } 00:12:01.158 } 00:12:01.158 ]' 00:12:01.158 03:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:01.158 03:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:01.158 03:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:01.417 03:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:01.417 03:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:01.417 03:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.417 03:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.417 03:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.676 03:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret 
DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:12:01.676 03:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:12:02.243 03:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:02.243 03:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:12:02.243 03:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.243 03:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.243 03:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.243 03:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:02.243 03:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:02.243 03:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:02.866 03:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:12:02.866 03:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:02.866 03:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:02.866 03:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:02.866 03:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:02.866 03:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.866 03:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:02.866 03:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.866 03:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.866 03:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.866 03:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:02.866 03:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:02.866 03:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.139 00:12:03.139 03:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:03.139 03:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:03.139 03:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:03.398 03:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.398 03:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.398 03:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.398 03:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.398 03:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.398 03:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:03.398 { 00:12:03.398 "cntlid": 123, 00:12:03.398 "qid": 0, 00:12:03.398 "state": "enabled", 00:12:03.398 "thread": "nvmf_tgt_poll_group_000", 00:12:03.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:12:03.398 "listen_address": { 00:12:03.398 "trtype": "TCP", 00:12:03.398 "adrfam": "IPv4", 00:12:03.398 "traddr": "10.0.0.3", 00:12:03.398 "trsvcid": "4420" 00:12:03.398 }, 00:12:03.398 "peer_address": { 00:12:03.398 "trtype": "TCP", 00:12:03.398 "adrfam": "IPv4", 00:12:03.398 "traddr": "10.0.0.1", 00:12:03.398 "trsvcid": "41774" 00:12:03.398 }, 00:12:03.398 "auth": { 00:12:03.398 "state": "completed", 00:12:03.398 "digest": "sha512", 00:12:03.398 "dhgroup": "ffdhe4096" 00:12:03.398 } 00:12:03.398 } 00:12:03.398 ]' 00:12:03.398 03:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:03.657 03:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:03.657 03:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:03.657 03:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:03.657 03:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:03.657 03:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.657 03:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.657 03:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.916 03:14:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:12:03.916 03:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:12:04.855 03:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:04.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:04.855 03:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:12:04.855 03:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.855 03:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.855 03:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.855 03:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:04.855 03:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:04.855 03:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:05.114 03:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:12:05.114 03:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:05.114 03:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:05.114 03:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:05.114 03:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:05.114 03:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.114 03:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.114 03:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.114 03:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.114 03:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.114 03:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.114 03:14:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.115 03:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.372 00:12:05.372 03:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:05.372 03:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:05.372 03:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.940 03:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:05.940 03:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:05.940 03:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.940 03:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.940 03:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.940 03:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:05.940 { 00:12:05.940 "cntlid": 125, 00:12:05.940 "qid": 0, 00:12:05.940 "state": "enabled", 00:12:05.940 "thread": "nvmf_tgt_poll_group_000", 00:12:05.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:12:05.940 "listen_address": { 00:12:05.940 "trtype": "TCP", 00:12:05.940 "adrfam": "IPv4", 00:12:05.940 "traddr": "10.0.0.3", 00:12:05.940 "trsvcid": "4420" 00:12:05.940 }, 00:12:05.940 "peer_address": { 00:12:05.940 "trtype": "TCP", 00:12:05.940 "adrfam": "IPv4", 00:12:05.940 "traddr": "10.0.0.1", 00:12:05.940 "trsvcid": "41792" 00:12:05.940 }, 00:12:05.940 "auth": { 00:12:05.940 "state": "completed", 00:12:05.940 "digest": "sha512", 00:12:05.940 "dhgroup": "ffdhe4096" 00:12:05.940 } 00:12:05.940 } 00:12:05.940 ]' 00:12:05.940 03:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:05.940 03:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:05.940 03:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:05.940 03:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:05.940 03:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:05.940 03:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.940 03:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.940 03:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.199 03:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:12:06.200 03:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:12:07.135 03:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.135 03:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:12:07.135 03:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.135 03:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.135 03:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.135 03:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:07.135 03:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:07.135 03:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:07.394 03:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:12:07.394 03:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:07.394 03:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:07.394 03:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:07.394 03:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:07.394 03:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.394 03:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key3 00:12:07.394 03:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.394 03:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.394 03:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.394 03:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:12:07.394 03:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:07.394 03:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:07.654 00:12:07.654 03:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:07.654 03:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:07.654 03:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:07.913 03:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.913 03:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:07.913 03:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.913 03:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.913 03:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.913 03:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:07.913 { 00:12:07.913 "cntlid": 127, 00:12:07.913 "qid": 0, 00:12:07.913 "state": "enabled", 00:12:07.913 "thread": "nvmf_tgt_poll_group_000", 00:12:07.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:12:07.914 "listen_address": { 00:12:07.914 "trtype": "TCP", 00:12:07.914 "adrfam": "IPv4", 00:12:07.914 "traddr": "10.0.0.3", 00:12:07.914 "trsvcid": "4420" 00:12:07.914 }, 00:12:07.914 "peer_address": { 00:12:07.914 "trtype": "TCP", 00:12:07.914 "adrfam": "IPv4", 00:12:07.914 "traddr": "10.0.0.1", 00:12:07.914 "trsvcid": "41828" 00:12:07.914 }, 00:12:07.914 "auth": { 00:12:07.914 "state": "completed", 00:12:07.914 "digest": "sha512", 00:12:07.914 "dhgroup": "ffdhe4096" 00:12:07.914 } 00:12:07.914 } 00:12:07.914 ]' 00:12:07.914 03:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:08.173 03:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:08.173 03:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:08.173 03:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:08.173 03:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:08.173 03:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.173 03:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.173 03:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.431 03:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:12:08.431 03:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:12:09.000 03:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.000 03:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:12:09.000 03:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.000 03:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.000 03:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.000 03:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:09.000 03:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:09.000 03:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:09.000 03:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:09.259 03:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:12:09.259 03:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:09.259 03:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:09.259 03:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:09.259 03:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:09.259 03:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.259 03:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.259 03:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.259 03:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.259 03:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.259 03:14:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.259 03:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.259 03:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.826 00:12:09.826 03:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:09.826 03:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.826 03:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:10.085 03:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.085 03:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.085 03:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.085 03:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.085 03:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.085 03:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:10.085 { 00:12:10.085 "cntlid": 129, 00:12:10.085 "qid": 0, 00:12:10.085 "state": "enabled", 00:12:10.085 "thread": "nvmf_tgt_poll_group_000", 00:12:10.085 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:12:10.085 "listen_address": { 00:12:10.085 "trtype": "TCP", 00:12:10.085 "adrfam": "IPv4", 00:12:10.085 "traddr": "10.0.0.3", 00:12:10.085 "trsvcid": "4420" 00:12:10.085 }, 00:12:10.085 "peer_address": { 00:12:10.085 "trtype": "TCP", 00:12:10.085 "adrfam": "IPv4", 00:12:10.085 "traddr": "10.0.0.1", 00:12:10.085 "trsvcid": "41858" 00:12:10.085 }, 00:12:10.085 "auth": { 00:12:10.085 "state": "completed", 00:12:10.085 "digest": "sha512", 00:12:10.085 "dhgroup": "ffdhe6144" 00:12:10.085 } 00:12:10.085 } 00:12:10.085 ]' 00:12:10.085 03:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:10.085 03:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:10.085 03:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:10.344 03:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:10.344 03:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:10.344 03:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.344 03:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.344 03:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.603 03:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:12:10.603 03:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:12:11.198 03:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.198 03:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:12:11.198 03:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.198 03:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.198 03:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.198 03:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:11.198 03:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:11.198 03:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:11.463 03:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:12:11.463 03:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:11.463 03:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:11.463 03:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:11.463 03:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:11.463 03:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.463 03:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.463 03:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.463 03:14:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.463 03:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.463 03:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.463 03:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.463 03:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:12.031 00:12:12.031 03:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:12.031 03:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.031 03:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:12.290 03:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.290 03:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.290 03:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.290 03:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.290 03:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.290 03:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:12.290 { 00:12:12.290 "cntlid": 131, 00:12:12.290 "qid": 0, 00:12:12.290 "state": "enabled", 00:12:12.290 "thread": "nvmf_tgt_poll_group_000", 00:12:12.290 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:12:12.290 "listen_address": { 00:12:12.290 "trtype": "TCP", 00:12:12.290 "adrfam": "IPv4", 00:12:12.290 "traddr": "10.0.0.3", 00:12:12.290 "trsvcid": "4420" 00:12:12.290 }, 00:12:12.290 "peer_address": { 00:12:12.290 "trtype": "TCP", 00:12:12.290 "adrfam": "IPv4", 00:12:12.290 "traddr": "10.0.0.1", 00:12:12.290 "trsvcid": "48626" 00:12:12.290 }, 00:12:12.290 "auth": { 00:12:12.290 "state": "completed", 00:12:12.290 "digest": "sha512", 00:12:12.290 "dhgroup": "ffdhe6144" 00:12:12.290 } 00:12:12.290 } 00:12:12.290 ]' 00:12:12.290 03:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:12.290 03:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:12.290 03:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:12.290 03:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:12.290 03:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:12:12.290 03:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.290 03:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.290 03:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.549 03:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:12:12.549 03:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:12:13.485 03:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.485 03:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:12:13.485 03:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.485 03:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.485 03:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.485 03:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:13.485 03:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:13.485 03:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:13.485 03:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:12:13.485 03:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:13.485 03:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:13.485 03:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:13.485 03:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:13.485 03:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.485 03:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.485 03:14:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.485 03:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.485 03:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.485 03:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.485 03:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.485 03:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:14.053 00:12:14.053 03:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:14.053 03:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:14.053 03:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.312 03:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.312 03:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.312 03:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.312 03:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.312 03:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.312 03:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:14.312 { 00:12:14.312 "cntlid": 133, 00:12:14.312 "qid": 0, 00:12:14.312 "state": "enabled", 00:12:14.312 "thread": "nvmf_tgt_poll_group_000", 00:12:14.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:12:14.312 "listen_address": { 00:12:14.312 "trtype": "TCP", 00:12:14.312 "adrfam": "IPv4", 00:12:14.312 "traddr": "10.0.0.3", 00:12:14.312 "trsvcid": "4420" 00:12:14.312 }, 00:12:14.312 "peer_address": { 00:12:14.312 "trtype": "TCP", 00:12:14.312 "adrfam": "IPv4", 00:12:14.312 "traddr": "10.0.0.1", 00:12:14.312 "trsvcid": "48656" 00:12:14.312 }, 00:12:14.312 "auth": { 00:12:14.312 "state": "completed", 00:12:14.312 "digest": "sha512", 00:12:14.312 "dhgroup": "ffdhe6144" 00:12:14.312 } 00:12:14.312 } 00:12:14.312 ]' 00:12:14.312 03:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:14.312 03:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:14.312 03:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:14.571 03:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:12:14.571 03:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:14.571 03:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.571 03:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.571 03:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.830 03:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:12:14.830 03:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:12:15.397 03:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.397 03:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:12:15.397 03:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.397 03:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.397 03:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.397 03:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:15.397 03:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:15.397 03:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:15.657 03:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:12:15.657 03:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:15.657 03:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:15.657 03:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:15.657 03:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:15.657 03:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.657 03:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key3 00:12:15.657 03:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.657 03:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.657 03:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.657 03:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:15.657 03:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:15.657 03:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:16.225 00:12:16.225 03:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:16.225 03:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:16.225 03:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.484 03:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.484 03:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.484 03:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.484 03:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.484 03:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.484 03:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:16.484 { 00:12:16.484 "cntlid": 135, 00:12:16.484 "qid": 0, 00:12:16.484 "state": "enabled", 00:12:16.484 "thread": "nvmf_tgt_poll_group_000", 00:12:16.484 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:12:16.484 "listen_address": { 00:12:16.484 "trtype": "TCP", 00:12:16.484 "adrfam": "IPv4", 00:12:16.484 "traddr": "10.0.0.3", 00:12:16.484 "trsvcid": "4420" 00:12:16.484 }, 00:12:16.484 "peer_address": { 00:12:16.484 "trtype": "TCP", 00:12:16.484 "adrfam": "IPv4", 00:12:16.484 "traddr": "10.0.0.1", 00:12:16.484 "trsvcid": "48700" 00:12:16.484 }, 00:12:16.484 "auth": { 00:12:16.484 "state": "completed", 00:12:16.484 "digest": "sha512", 00:12:16.484 "dhgroup": "ffdhe6144" 00:12:16.484 } 00:12:16.484 } 00:12:16.484 ]' 00:12:16.484 03:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:16.742 03:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:16.742 03:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:16.742 03:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:16.742 03:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:16.742 03:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.742 03:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.742 03:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.000 03:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:12:17.000 03:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:12:17.935 03:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.935 03:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:12:17.935 03:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.935 03:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.935 03:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.935 03:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:17.935 03:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:17.935 03:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:17.935 03:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:17.935 03:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:12:17.935 03:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:17.935 03:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:17.935 03:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:17.935 03:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:17.935 03:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.935 03:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.935 03:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.935 03:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.935 03:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.935 03:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.935 03:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.935 03:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:18.870 00:12:18.870 03:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:18.870 03:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:18.870 03:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.128 03:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.128 03:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.129 03:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.129 03:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.129 03:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.129 03:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:19.129 { 00:12:19.129 "cntlid": 137, 00:12:19.129 "qid": 0, 00:12:19.129 "state": "enabled", 00:12:19.129 "thread": "nvmf_tgt_poll_group_000", 00:12:19.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:12:19.129 "listen_address": { 00:12:19.129 "trtype": "TCP", 00:12:19.129 "adrfam": "IPv4", 00:12:19.129 "traddr": "10.0.0.3", 00:12:19.129 "trsvcid": "4420" 00:12:19.129 }, 00:12:19.129 "peer_address": { 00:12:19.129 "trtype": "TCP", 00:12:19.129 "adrfam": "IPv4", 00:12:19.129 "traddr": "10.0.0.1", 00:12:19.129 "trsvcid": "48740" 00:12:19.129 }, 00:12:19.129 "auth": { 00:12:19.129 "state": "completed", 00:12:19.129 "digest": "sha512", 00:12:19.129 "dhgroup": "ffdhe8192" 00:12:19.129 } 00:12:19.129 } 00:12:19.129 ]' 00:12:19.129 03:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:19.129 03:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:19.129 03:15:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:19.129 03:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:19.129 03:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:19.129 03:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.129 03:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:19.129 03:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.387 03:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:12:19.387 03:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:12:20.323 03:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.323 03:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:12:20.323 03:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.323 03:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.323 03:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.323 03:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:20.323 03:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:20.323 03:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:20.581 03:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:12:20.581 03:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:20.581 03:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:20.581 03:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:20.581 03:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:20.581 03:15:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.581 03:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.581 03:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.581 03:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.581 03:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.581 03:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.581 03:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.581 03:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.149 00:12:21.149 03:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:21.149 03:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:21.150 03:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.409 03:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.409 03:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.409 03:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.409 03:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.409 03:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.409 03:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:21.409 { 00:12:21.409 "cntlid": 139, 00:12:21.409 "qid": 0, 00:12:21.409 "state": "enabled", 00:12:21.409 "thread": "nvmf_tgt_poll_group_000", 00:12:21.409 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:12:21.409 "listen_address": { 00:12:21.409 "trtype": "TCP", 00:12:21.409 "adrfam": "IPv4", 00:12:21.409 "traddr": "10.0.0.3", 00:12:21.409 "trsvcid": "4420" 00:12:21.409 }, 00:12:21.409 "peer_address": { 00:12:21.409 "trtype": "TCP", 00:12:21.409 "adrfam": "IPv4", 00:12:21.409 "traddr": "10.0.0.1", 00:12:21.409 "trsvcid": "49962" 00:12:21.409 }, 00:12:21.409 "auth": { 00:12:21.409 "state": "completed", 00:12:21.409 "digest": "sha512", 00:12:21.409 "dhgroup": "ffdhe8192" 00:12:21.409 } 00:12:21.409 } 00:12:21.409 ]' 00:12:21.409 03:15:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:21.409 03:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:21.409 03:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:21.409 03:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:21.409 03:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:21.668 03:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.668 03:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.668 03:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.959 03:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:12:21.959 03:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: --dhchap-ctrl-secret DHHC-1:02:YjFiYjM2ZTFmNmQ0ZGY3YzkzMjUwNmE0MzZhNGYxZTg5ODk3MzcyYTRlNGRhNTYyEzJ1og==: 00:12:22.530 03:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.530 03:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:12:22.530 03:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.530 03:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.530 03:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.530 03:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:22.530 03:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:22.530 03:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:22.788 03:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:12:22.788 03:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:22.788 03:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:22.788 03:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:12:22.788 03:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:22.788 03:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.788 03:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:22.788 03:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.788 03:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.788 03:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.788 03:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:22.788 03:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:22.788 03:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:23.355 00:12:23.355 03:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:23.355 03:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.355 03:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:23.922 03:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.922 03:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.922 03:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.922 03:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.922 03:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.923 03:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:23.923 { 00:12:23.923 "cntlid": 141, 00:12:23.923 "qid": 0, 00:12:23.923 "state": "enabled", 00:12:23.923 "thread": "nvmf_tgt_poll_group_000", 00:12:23.923 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:12:23.923 "listen_address": { 00:12:23.923 "trtype": "TCP", 00:12:23.923 "adrfam": "IPv4", 00:12:23.923 "traddr": "10.0.0.3", 00:12:23.923 "trsvcid": "4420" 00:12:23.923 }, 00:12:23.923 "peer_address": { 00:12:23.923 "trtype": "TCP", 00:12:23.923 "adrfam": "IPv4", 00:12:23.923 "traddr": "10.0.0.1", 00:12:23.923 "trsvcid": "49978" 00:12:23.923 }, 00:12:23.923 "auth": { 00:12:23.923 "state": "completed", 00:12:23.923 "digest": 
"sha512", 00:12:23.923 "dhgroup": "ffdhe8192" 00:12:23.923 } 00:12:23.923 } 00:12:23.923 ]' 00:12:23.923 03:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:23.923 03:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:23.923 03:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:23.923 03:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:23.923 03:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:23.923 03:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.923 03:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.923 03:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.181 03:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:12:24.181 03:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:01:MTllN2RmODcwMDJjYTdhN2ZkNTQ3OGU5ZjcwNTllYje0p/Dm: 00:12:24.749 03:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.749 03:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:12:24.749 03:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.749 03:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.749 03:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.749 03:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:24.749 03:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:24.749 03:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:25.317 03:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:12:25.317 03:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:25.317 03:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:12:25.317 03:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:25.317 03:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:25.317 03:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.317 03:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key3 00:12:25.317 03:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.317 03:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.317 03:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.317 03:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:25.317 03:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:25.317 03:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:25.885 00:12:25.885 03:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:25.885 03:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:25.885 03:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.144 03:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.144 03:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.144 03:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.144 03:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.144 03:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.144 03:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:26.144 { 00:12:26.144 "cntlid": 143, 00:12:26.144 "qid": 0, 00:12:26.144 "state": "enabled", 00:12:26.144 "thread": "nvmf_tgt_poll_group_000", 00:12:26.144 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:12:26.144 "listen_address": { 00:12:26.144 "trtype": "TCP", 00:12:26.144 "adrfam": "IPv4", 00:12:26.144 "traddr": "10.0.0.3", 00:12:26.144 "trsvcid": "4420" 00:12:26.144 }, 00:12:26.144 "peer_address": { 00:12:26.144 "trtype": "TCP", 00:12:26.144 "adrfam": "IPv4", 00:12:26.144 "traddr": "10.0.0.1", 00:12:26.144 "trsvcid": "49996" 00:12:26.144 }, 00:12:26.144 "auth": { 00:12:26.144 "state": "completed", 00:12:26.144 
"digest": "sha512", 00:12:26.144 "dhgroup": "ffdhe8192" 00:12:26.144 } 00:12:26.144 } 00:12:26.144 ]' 00:12:26.144 03:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:26.144 03:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:26.144 03:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:26.144 03:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:26.144 03:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:26.403 03:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.403 03:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.403 03:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.661 03:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:12:26.661 03:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:12:27.228 03:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.228 03:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:12:27.228 03:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.228 03:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.228 03:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.228 03:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:27.228 03:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:12:27.228 03:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:27.228 03:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:27.486 03:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:27.486 03:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:27.745 03:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:12:27.745 03:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:27.745 03:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:27.745 03:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:27.745 03:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:27.745 03:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.745 03:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.745 03:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.745 03:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.745 03:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.745 03:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.745 03:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.745 03:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:28.312 00:12:28.312 03:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:28.312 03:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:28.312 03:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.570 03:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.570 03:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.570 03:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.570 03:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.570 03:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.570 03:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:28.570 { 00:12:28.570 "cntlid": 145, 00:12:28.570 "qid": 0, 00:12:28.570 "state": "enabled", 00:12:28.570 "thread": "nvmf_tgt_poll_group_000", 00:12:28.570 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:12:28.570 "listen_address": { 00:12:28.570 "trtype": "TCP", 00:12:28.570 "adrfam": "IPv4", 00:12:28.570 "traddr": "10.0.0.3", 00:12:28.570 "trsvcid": "4420" 00:12:28.570 }, 00:12:28.570 "peer_address": { 00:12:28.570 "trtype": "TCP", 00:12:28.570 "adrfam": "IPv4", 00:12:28.570 "traddr": "10.0.0.1", 00:12:28.570 "trsvcid": "50036" 00:12:28.570 }, 00:12:28.570 "auth": { 00:12:28.570 "state": "completed", 00:12:28.570 "digest": "sha512", 00:12:28.570 "dhgroup": "ffdhe8192" 00:12:28.570 } 00:12:28.570 } 00:12:28.570 ]' 00:12:28.570 03:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:28.830 03:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:28.830 03:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:28.830 03:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:28.830 03:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:28.830 03:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.830 03:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.830 03:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.089 03:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:12:29.089 03:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:00:MTE1NjUyMzkwYTY2OTE5YzAxMGE1YzA3NDE0YTdmMGI5OTNiMjJlOGMyMjY5NDY083JO0g==: --dhchap-ctrl-secret DHHC-1:03:YjU2MWUzZjRjNGZmYWE5MWEwYTQ5NWZmNGMyOTcyYTc0Mjg4NzNlYjljYWJlZTYzYzUzMDliN2RhMTUwOWFjZvUGRj4=: 00:12:29.656 03:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.657 03:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:12:29.657 03:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.657 03:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.657 03:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.657 03:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key1 00:12:29.657 03:15:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.657 03:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.657 03:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.657 03:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:12:29.657 03:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:29.657 03:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:12:29.657 03:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:29.657 03:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:29.657 03:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:29.657 03:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:29.657 03:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:12:29.657 03:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:29.657 03:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:30.593 request: 00:12:30.593 { 00:12:30.593 "name": "nvme0", 00:12:30.593 "trtype": "tcp", 00:12:30.593 "traddr": "10.0.0.3", 00:12:30.593 "adrfam": "ipv4", 00:12:30.593 "trsvcid": "4420", 00:12:30.593 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:30.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:12:30.593 "prchk_reftag": false, 00:12:30.593 "prchk_guard": false, 00:12:30.593 "hdgst": false, 00:12:30.593 "ddgst": false, 00:12:30.593 "dhchap_key": "key2", 00:12:30.593 "allow_unrecognized_csi": false, 00:12:30.593 "method": "bdev_nvme_attach_controller", 00:12:30.593 "req_id": 1 00:12:30.593 } 00:12:30.593 Got JSON-RPC error response 00:12:30.593 response: 00:12:30.593 { 00:12:30.593 "code": -5, 00:12:30.593 "message": "Input/output error" 00:12:30.593 } 00:12:30.593 03:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:30.593 03:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:30.593 03:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:30.593 03:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:30.593 03:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:12:30.593 
03:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.593 03:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.593 03:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.593 03:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.593 03:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.593 03:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.593 03:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.594 03:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:30.594 03:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:30.594 03:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:30.594 03:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:30.594 03:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:30.594 03:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:30.594 03:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:30.594 03:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:30.594 03:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:30.594 03:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:31.162 request: 00:12:31.162 { 00:12:31.162 "name": "nvme0", 00:12:31.162 "trtype": "tcp", 00:12:31.162 "traddr": "10.0.0.3", 00:12:31.162 "adrfam": "ipv4", 00:12:31.162 "trsvcid": "4420", 00:12:31.162 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:31.162 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:12:31.162 "prchk_reftag": false, 00:12:31.162 "prchk_guard": false, 00:12:31.162 "hdgst": false, 00:12:31.162 "ddgst": false, 00:12:31.162 "dhchap_key": "key1", 00:12:31.162 "dhchap_ctrlr_key": "ckey2", 00:12:31.162 "allow_unrecognized_csi": false, 00:12:31.162 "method": "bdev_nvme_attach_controller", 00:12:31.162 "req_id": 1 00:12:31.162 } 00:12:31.162 Got JSON-RPC error response 00:12:31.162 response: 00:12:31.162 { 
00:12:31.162 "code": -5, 00:12:31.162 "message": "Input/output error" 00:12:31.162 } 00:12:31.162 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:31.162 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:31.162 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:31.162 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:31.162 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:12:31.162 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.162 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.162 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.162 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key1 00:12:31.162 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.162 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.162 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.162 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:31.162 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:31.162 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:31.162 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:31.162 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:31.162 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:31.162 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:31.162 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:31.162 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:31.162 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:31.729 
request: 00:12:31.729 { 00:12:31.729 "name": "nvme0", 00:12:31.729 "trtype": "tcp", 00:12:31.729 "traddr": "10.0.0.3", 00:12:31.729 "adrfam": "ipv4", 00:12:31.729 "trsvcid": "4420", 00:12:31.729 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:31.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:12:31.729 "prchk_reftag": false, 00:12:31.729 "prchk_guard": false, 00:12:31.729 "hdgst": false, 00:12:31.729 "ddgst": false, 00:12:31.729 "dhchap_key": "key1", 00:12:31.729 "dhchap_ctrlr_key": "ckey1", 00:12:31.729 "allow_unrecognized_csi": false, 00:12:31.729 "method": "bdev_nvme_attach_controller", 00:12:31.729 "req_id": 1 00:12:31.729 } 00:12:31.729 Got JSON-RPC error response 00:12:31.729 response: 00:12:31.729 { 00:12:31.729 "code": -5, 00:12:31.729 "message": "Input/output error" 00:12:31.729 } 00:12:31.729 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:31.729 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:31.729 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:31.729 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:31.729 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:12:31.729 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.729 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.729 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.729 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67400 00:12:31.729 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 67400 ']' 00:12:31.729 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 67400 00:12:31.729 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:12:31.729 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:31.729 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67400 00:12:31.729 killing process with pid 67400 00:12:31.729 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:31.729 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:31.729 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67400' 00:12:31.729 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 67400 00:12:31.729 03:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 67400 00:12:31.988 03:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:12:31.988 03:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:31.988 03:15:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:31.988 03:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.988 03:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=70467 00:12:31.988 03:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:12:31.988 03:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 70467 00:12:31.988 03:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 70467 ']' 00:12:31.988 03:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.988 03:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:31.988 03:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.988 03:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:31.988 03:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.363 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:33.364 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:12:33.364 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:33.364 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:33.364 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.364 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.364 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:33.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.364 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70467 00:12:33.364 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 70467 ']' 00:12:33.364 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.364 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:33.364 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:33.364 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:33.364 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.364 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:33.364 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:12:33.364 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:12:33.364 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.364 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.623 null0 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.DwD 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.nL6 ]] 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nL6 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.0o9 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Tvt ]] 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Tvt 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:33.623 03:15:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.NtK 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.VHD ]] 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.VHD 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.6IO 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key3 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:12:33.623 03:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:34.559 nvme0n1 00:12:34.559 03:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:34.559 03:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.559 03:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:34.818 03:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.818 03:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.818 03:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.818 03:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.819 03:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.819 03:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:34.819 { 00:12:34.819 "cntlid": 1, 00:12:34.819 "qid": 0, 00:12:34.819 "state": "enabled", 00:12:34.819 "thread": "nvmf_tgt_poll_group_000", 00:12:34.819 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:12:34.819 "listen_address": { 00:12:34.819 "trtype": "TCP", 00:12:34.819 "adrfam": "IPv4", 00:12:34.819 "traddr": "10.0.0.3", 00:12:34.819 "trsvcid": "4420" 00:12:34.819 }, 00:12:34.819 "peer_address": { 00:12:34.819 "trtype": "TCP", 00:12:34.819 "adrfam": "IPv4", 00:12:34.819 "traddr": "10.0.0.1", 00:12:34.819 "trsvcid": "47604" 00:12:34.819 }, 00:12:34.819 "auth": { 00:12:34.819 "state": "completed", 00:12:34.819 "digest": "sha512", 00:12:34.819 "dhgroup": "ffdhe8192" 00:12:34.819 } 00:12:34.819 } 00:12:34.819 ]' 00:12:34.819 03:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:35.078 03:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:35.078 03:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:35.078 03:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:35.078 03:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:35.078 03:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.078 03:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.078 03:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:35.336 03:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:12:35.336 03:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:12:36.273 03:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.273 03:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:12:36.273 03:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.273 03:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.273 03:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.273 03:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key3 00:12:36.273 03:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.273 03:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.273 03:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.273 03:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:12:36.273 03:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:12:36.539 03:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:12:36.539 03:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:36.539 03:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:12:36.539 03:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:36.539 03:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:36.539 03:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:36.539 03:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:36.539 03:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:36.539 03:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:36.539 03:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:36.803 request: 00:12:36.803 { 00:12:36.803 "name": "nvme0", 00:12:36.803 "trtype": "tcp", 00:12:36.803 "traddr": "10.0.0.3", 00:12:36.803 "adrfam": "ipv4", 00:12:36.803 "trsvcid": "4420", 00:12:36.803 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:36.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:12:36.803 "prchk_reftag": false, 00:12:36.803 "prchk_guard": false, 00:12:36.803 "hdgst": false, 00:12:36.803 "ddgst": false, 00:12:36.803 "dhchap_key": "key3", 00:12:36.803 "allow_unrecognized_csi": false, 00:12:36.803 "method": "bdev_nvme_attach_controller", 00:12:36.803 "req_id": 1 00:12:36.803 } 00:12:36.803 Got JSON-RPC error response 00:12:36.803 response: 00:12:36.803 { 00:12:36.803 "code": -5, 00:12:36.803 "message": "Input/output error" 00:12:36.803 } 00:12:36.803 03:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:36.803 03:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:36.803 03:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:36.803 03:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:36.803 03:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:12:36.803 03:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:12:36.803 03:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:36.803 03:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:37.062 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:12:37.062 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:37.062 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:12:37.062 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:37.062 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:37.062 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:37.062 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:37.062 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:37.062 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:37.062 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:37.321 request: 00:12:37.321 { 00:12:37.321 "name": "nvme0", 00:12:37.321 "trtype": "tcp", 00:12:37.321 "traddr": "10.0.0.3", 00:12:37.321 "adrfam": "ipv4", 00:12:37.321 "trsvcid": "4420", 00:12:37.321 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:37.321 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:12:37.321 "prchk_reftag": false, 00:12:37.321 "prchk_guard": false, 00:12:37.321 "hdgst": false, 00:12:37.321 "ddgst": false, 00:12:37.321 "dhchap_key": "key3", 00:12:37.321 "allow_unrecognized_csi": false, 00:12:37.321 "method": "bdev_nvme_attach_controller", 00:12:37.321 "req_id": 1 00:12:37.321 } 00:12:37.321 Got JSON-RPC error response 00:12:37.321 response: 00:12:37.321 { 00:12:37.321 "code": -5, 00:12:37.321 "message": "Input/output error" 00:12:37.321 } 00:12:37.321 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:37.321 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:37.321 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:37.321 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:37.321 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:12:37.321 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:12:37.321 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:12:37.321 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:37.321 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:37.321 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:37.579 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:12:37.579 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.579 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.579 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.579 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:12:37.579 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.579 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.579 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.579 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:37.579 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:37.579 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:37.579 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:37.579 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:37.579 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:37.579 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:37.579 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:37.579 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:37.579 03:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:38.146 request: 00:12:38.146 { 00:12:38.146 "name": "nvme0", 00:12:38.147 "trtype": "tcp", 00:12:38.147 "traddr": "10.0.0.3", 00:12:38.147 "adrfam": "ipv4", 00:12:38.147 "trsvcid": "4420", 00:12:38.147 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:38.147 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:12:38.147 "prchk_reftag": false, 00:12:38.147 "prchk_guard": false, 00:12:38.147 "hdgst": false, 00:12:38.147 "ddgst": false, 00:12:38.147 "dhchap_key": "key0", 00:12:38.147 "dhchap_ctrlr_key": "key1", 00:12:38.147 "allow_unrecognized_csi": false, 00:12:38.147 "method": "bdev_nvme_attach_controller", 00:12:38.147 "req_id": 1 00:12:38.147 } 00:12:38.147 Got JSON-RPC error response 00:12:38.147 response: 00:12:38.147 { 00:12:38.147 "code": -5, 00:12:38.147 "message": "Input/output error" 00:12:38.147 } 00:12:38.147 03:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:38.147 03:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:38.147 03:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:38.147 03:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:12:38.147 03:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:12:38.147 03:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:12:38.147 03:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:12:38.406 nvme0n1 00:12:38.406 03:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:12:38.406 03:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:12:38.406 03:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.665 03:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.665 03:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.665 03:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:39.232 03:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key1 00:12:39.233 03:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.233 03:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.233 03:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.233 03:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:12:39.233 03:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:39.233 03:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:40.169 nvme0n1 00:12:40.169 03:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:12:40.169 03:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:12:40.169 03:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.429 03:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.429 03:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:40.429 03:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.429 03:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.429 03:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.429 03:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:12:40.429 03:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.429 03:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:12:40.688 03:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.688 03:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:12:40.688 03:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid cb2c30f2-294c-46db-807f-ce0b3b357918 -l 0 --dhchap-secret DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: --dhchap-ctrl-secret DHHC-1:03:NGJiMThjNTI4MjYwZjlkZjk5YzBiZDk1N2Q1NjgyMzY4MTNiNmZiZGMxMDA0MjJmYzE3OWZlNjAyMzAxZjcxMapnBF0=: 00:12:41.257 03:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:12:41.257 03:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:12:41.257 03:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:12:41.257 03:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:12:41.516 03:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:12:41.516 03:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:12:41.516 03:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:12:41.516 03:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.516 03:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.775 03:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:12:41.775 03:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:41.775 03:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:12:41.775 03:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:41.775 03:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:41.775 03:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:41.775 03:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:41.775 03:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:12:41.775 03:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:41.775 03:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:42.343 request: 00:12:42.343 { 00:12:42.343 "name": "nvme0", 00:12:42.343 "trtype": "tcp", 00:12:42.343 "traddr": "10.0.0.3", 00:12:42.343 "adrfam": "ipv4", 00:12:42.343 "trsvcid": "4420", 00:12:42.343 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:42.343 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918", 00:12:42.343 "prchk_reftag": false, 00:12:42.343 "prchk_guard": false, 00:12:42.343 "hdgst": false, 00:12:42.343 "ddgst": false, 00:12:42.343 "dhchap_key": "key1", 00:12:42.343 "allow_unrecognized_csi": false, 00:12:42.343 "method": "bdev_nvme_attach_controller", 00:12:42.343 "req_id": 1 00:12:42.343 } 00:12:42.343 Got JSON-RPC error response 00:12:42.343 response: 00:12:42.343 { 00:12:42.343 "code": -5, 00:12:42.343 "message": "Input/output error" 00:12:42.343 } 00:12:42.343 03:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:42.343 03:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:42.343 03:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:42.343 03:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:42.343 03:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:42.343 03:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:42.343 03:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:43.280 nvme0n1 00:12:43.280 
03:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:12:43.280 03:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:12:43.280 03:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:43.539 03:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.539 03:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.539 03:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.798 03:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:12:43.798 03:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.798 03:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.798 03:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.798 03:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:12:43.798 03:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:12:43.798 03:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:12:44.365 nvme0n1 00:12:44.365 03:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:12:44.365 03:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.365 03:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:12:44.365 03:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.365 03:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.365 03:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.624 03:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:44.624 03:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.624 03:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.624 03:15:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.624 03:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: '' 2s 00:12:44.624 03:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:12:44.624 03:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:12:44.624 03:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: 00:12:44.624 03:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:12:44.624 03:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:12:44.624 03:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:12:44.624 03:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: ]] 00:12:44.624 03:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZWJjMDdiNTFmOGNlODNjZTNiZWRhMjBiNmI5ZGZjZmEFwsWE: 00:12:44.624 03:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:12:44.624 03:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:12:44.624 03:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:12:47.156 03:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:12:47.156 03:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:12:47.156 03:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:12:47.156 03:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:12:47.156 03:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:12:47.156 03:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:12:47.156 03:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:12:47.156 03:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key1 --dhchap-ctrlr-key key2 00:12:47.156 03:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.156 03:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.156 03:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.156 03:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: 2s 00:12:47.156 03:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:12:47.156 03:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:12:47.156 03:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:12:47.156 03:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: 00:12:47.156 03:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:12:47.156 03:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:12:47.156 03:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:12:47.156 03:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: ]] 00:12:47.156 03:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZTI0MGYzNDkxMjFkYjA0YzBmNzdmNGU5OGM4ZWNkNTFkZDhjOGFkZDA4YWVlN2VmKrnzVw==: 00:12:47.156 03:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:12:47.156 03:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:12:49.062 03:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:12:49.062 03:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:12:49.062 03:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:12:49.062 03:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:12:49.062 03:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:12:49.062 03:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:12:49.062 03:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:12:49.062 03:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.062 03:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:49.062 03:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.062 03:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.062 03:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.062 03:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:49.062 03:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:49.062 03:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:50.017 nvme0n1 00:12:50.017 03:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:50.017 03:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.017 03:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.017 03:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.017 03:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:50.017 03:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:50.635 03:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:12:50.635 03:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.635 03:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:12:50.635 03:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:50.635 03:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:12:50.635 03:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.635 03:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.893 03:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.893 03:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:12:50.893 03:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:12:51.151 03:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:12:51.151 03:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:12:51.151 03:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.151 03:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.151 03:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:51.151 03:15:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.151 03:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.152 03:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.152 03:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:51.152 03:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:51.152 03:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:51.152 03:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:51.152 03:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:51.152 03:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:51.152 03:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:51.152 03:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:51.152 03:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:52.089 request: 00:12:52.089 { 00:12:52.089 "name": "nvme0", 00:12:52.089 "dhchap_key": "key1", 00:12:52.089 "dhchap_ctrlr_key": "key3", 00:12:52.089 "method": "bdev_nvme_set_keys", 00:12:52.089 "req_id": 1 00:12:52.089 } 00:12:52.089 Got JSON-RPC error response 00:12:52.089 response: 00:12:52.089 { 00:12:52.089 "code": -13, 00:12:52.089 "message": "Permission denied" 00:12:52.089 } 00:12:52.089 03:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:52.089 03:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:52.089 03:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:52.089 03:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:52.089 03:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:12:52.089 03:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.089 03:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:12:52.089 03:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:12:52.089 03:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:12:53.472 03:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:12:53.472 03:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:12:53.472 03:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.472 03:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:12:53.472 03:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:53.472 03:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.472 03:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.472 03:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.472 03:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:53.472 03:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:53.472 03:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:54.410 nvme0n1 00:12:54.410 03:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:54.410 03:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.410 03:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.410 03:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.410 03:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:54.410 03:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:54.410 03:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:54.410 03:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:54.410 03:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:54.410 03:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:54.410 03:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:54.410 03:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:54.410 03:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:54.979 request: 00:12:54.979 { 00:12:54.979 "name": "nvme0", 00:12:54.979 "dhchap_key": "key2", 00:12:54.979 "dhchap_ctrlr_key": "key0", 00:12:54.979 "method": "bdev_nvme_set_keys", 00:12:54.979 "req_id": 1 00:12:54.979 } 00:12:54.979 Got JSON-RPC error response 00:12:54.979 response: 00:12:54.979 { 00:12:54.979 "code": -13, 00:12:54.979 "message": "Permission denied" 00:12:54.979 } 00:12:54.979 03:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:54.979 03:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:54.979 03:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:54.979 03:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:54.979 03:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:12:54.979 03:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:12:54.979 03:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.238 03:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:12:55.238 03:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:12:56.616 03:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:12:56.616 03:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:12:56.616 03:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.616 03:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:12:56.616 03:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:12:56.616 03:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:12:56.616 03:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67432 00:12:56.616 03:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 67432 ']' 00:12:56.616 03:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 67432 00:12:56.616 03:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:12:56.616 03:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:56.616 03:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67432 00:12:56.616 killing process with pid 67432 00:12:56.616 03:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:56.616 03:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:56.616 03:15:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67432' 00:12:56.616 03:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 67432 00:12:56.616 03:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 67432 00:12:57.184 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:12:57.184 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:57.184 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:12:57.184 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:57.184 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:12:57.184 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:57.184 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:57.184 rmmod nvme_tcp 00:12:57.184 rmmod nvme_fabrics 00:12:57.184 rmmod nvme_keyring 00:12:57.184 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:57.184 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:12:57.184 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:12:57.184 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 70467 ']' 00:12:57.184 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 70467 00:12:57.184 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 70467 ']' 00:12:57.184 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 70467 00:12:57.184 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:12:57.184 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:57.184 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70467 00:12:57.184 killing process with pid 70467 00:12:57.184 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:57.184 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:57.184 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70467' 00:12:57.184 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 70467 00:12:57.184 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 70467 00:12:57.443 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:57.443 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:57.443 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:57.443 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:12:57.443 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 
00:12:57.443 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:12:57.443 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:12:57.443 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:57.443 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:57.443 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:57.443 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:57.443 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:57.443 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:57.443 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:57.443 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:57.703 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:57.703 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:57.703 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:57.703 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:57.703 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:57.703 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:57.703 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:57.703 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:57.703 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.703 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:57.703 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.703 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:12:57.703 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.DwD /tmp/spdk.key-sha256.0o9 /tmp/spdk.key-sha384.NtK /tmp/spdk.key-sha512.6IO /tmp/spdk.key-sha512.nL6 /tmp/spdk.key-sha384.Tvt /tmp/spdk.key-sha256.VHD '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:12:57.703 00:12:57.703 real 3m11.016s 00:12:57.703 user 7m37.332s 00:12:57.703 sys 0m29.661s 00:12:57.703 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:57.703 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.703 ************************************ 00:12:57.703 END TEST nvmf_auth_target 
00:12:57.703 ************************************ 00:12:57.703 03:15:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:12:57.703 03:15:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:57.703 03:15:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:57.703 03:15:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:57.703 03:15:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:57.703 ************************************ 00:12:57.703 START TEST nvmf_bdevio_no_huge 00:12:57.703 ************************************ 00:12:57.703 03:15:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:57.963 * Looking for test storage... 00:12:57.963 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:57.963 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:57.963 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:57.963 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:12:57.963 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:57.963 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:57.963 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:57.963 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:57.963 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:12:57.963 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:12:57.963 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:12:57.963 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:12:57.963 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:12:57.963 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:12:57.963 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:12:57.963 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:57.963 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:12:57.963 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:12:57.963 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:57.963 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:57.963 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:12:57.963 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:12:57.963 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:57.963 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:12:57.963 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:12:57.963 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:12:57.963 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:12:57.963 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:57.963 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:12:57.963 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:12:57.963 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:57.963 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:57.963 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:12:57.963 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:57.963 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:57.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.964 --rc genhtml_branch_coverage=1 00:12:57.964 --rc genhtml_function_coverage=1 00:12:57.964 --rc genhtml_legend=1 00:12:57.964 --rc geninfo_all_blocks=1 00:12:57.964 --rc geninfo_unexecuted_blocks=1 00:12:57.964 00:12:57.964 ' 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:57.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.964 --rc genhtml_branch_coverage=1 00:12:57.964 --rc genhtml_function_coverage=1 00:12:57.964 --rc genhtml_legend=1 00:12:57.964 --rc geninfo_all_blocks=1 00:12:57.964 --rc geninfo_unexecuted_blocks=1 00:12:57.964 00:12:57.964 ' 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:57.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.964 --rc genhtml_branch_coverage=1 00:12:57.964 --rc genhtml_function_coverage=1 00:12:57.964 --rc genhtml_legend=1 00:12:57.964 --rc geninfo_all_blocks=1 00:12:57.964 --rc geninfo_unexecuted_blocks=1 00:12:57.964 00:12:57.964 ' 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:57.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.964 --rc genhtml_branch_coverage=1 00:12:57.964 --rc genhtml_function_coverage=1 00:12:57.964 --rc genhtml_legend=1 00:12:57.964 --rc geninfo_all_blocks=1 00:12:57.964 --rc geninfo_unexecuted_blocks=1 00:12:57.964 00:12:57.964 ' 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:57.964 
03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:57.964 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@458 -- # nvmf_veth_init 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:57.964 
03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:57.964 Cannot find device "nvmf_init_br" 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:57.964 Cannot find device "nvmf_init_br2" 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:12:57.964 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:57.964 Cannot find device "nvmf_tgt_br" 00:12:57.965 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:12:57.965 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:57.965 Cannot find device "nvmf_tgt_br2" 00:12:57.965 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:12:57.965 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:58.224 Cannot find device "nvmf_init_br" 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:58.224 Cannot find device "nvmf_init_br2" 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:58.224 Cannot find device "nvmf_tgt_br" 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:58.224 Cannot find device "nvmf_tgt_br2" 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:58.224 Cannot find device "nvmf_br" 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:58.224 Cannot find device "nvmf_init_if" 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:58.224 Cannot find device "nvmf_init_if2" 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:12:58.224 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:58.224 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:58.224 03:15:41 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:58.224 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:58.484 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:58.484 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:12:58.484 00:12:58.484 --- 10.0.0.3 ping statistics --- 00:12:58.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.484 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:58.484 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:58.484 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:12:58.484 00:12:58.484 --- 10.0.0.4 ping statistics --- 00:12:58.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.484 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:58.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:58.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:12:58.484 00:12:58.484 --- 10.0.0.1 ping statistics --- 00:12:58.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.484 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:58.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:58.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:12:58.484 00:12:58.484 --- 10.0.0.2 ping statistics --- 00:12:58.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.484 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # return 0 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=71110 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 71110 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 71110 ']' 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:58.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:58.484 03:15:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:58.484 [2024-10-09 03:15:41.702786] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:12:58.484 [2024-10-09 03:15:41.702904] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:12:58.744 [2024-10-09 03:15:41.853478] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:58.744 [2024-10-09 03:15:42.015102] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:58.744 [2024-10-09 03:15:42.015193] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:58.744 [2024-10-09 03:15:42.015208] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:58.744 [2024-10-09 03:15:42.015227] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:58.744 [2024-10-09 03:15:42.015236] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:58.744 [2024-10-09 03:15:42.016331] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:12:58.744 [2024-10-09 03:15:42.016451] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:12:58.744 [2024-10-09 03:15:42.016585] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:12:58.744 [2024-10-09 03:15:42.016591] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:58.744 [2024-10-09 03:15:42.023111] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:59.682 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:59.682 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:12:59.682 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:59.682 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:59.682 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:59.682 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.682 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:59.682 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.682 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:59.683 [2024-10-09 03:15:42.799416] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:59.683 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.683 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:59.683 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.683 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:59.683 Malloc0 00:12:59.683 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.683 03:15:42 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:59.683 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.683 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:59.683 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.683 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:59.683 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.683 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:59.683 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.683 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:59.683 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.683 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:59.683 [2024-10-09 03:15:42.840558] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:59.683 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.683 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:12:59.683 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:59.683 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:12:59.683 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:12:59.683 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:12:59.683 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:12:59.683 { 00:12:59.683 "params": { 00:12:59.683 "name": "Nvme$subsystem", 00:12:59.683 "trtype": "$TEST_TRANSPORT", 00:12:59.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:59.683 "adrfam": "ipv4", 00:12:59.683 "trsvcid": "$NVMF_PORT", 00:12:59.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:59.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:59.683 "hdgst": ${hdgst:-false}, 00:12:59.683 "ddgst": ${ddgst:-false} 00:12:59.683 }, 00:12:59.683 "method": "bdev_nvme_attach_controller" 00:12:59.683 } 00:12:59.683 EOF 00:12:59.683 )") 00:12:59.683 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:12:59.683 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 
00:12:59.683 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:12:59.683 03:15:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:12:59.683 "params": { 00:12:59.683 "name": "Nvme1", 00:12:59.683 "trtype": "tcp", 00:12:59.683 "traddr": "10.0.0.3", 00:12:59.683 "adrfam": "ipv4", 00:12:59.683 "trsvcid": "4420", 00:12:59.683 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:59.683 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:59.683 "hdgst": false, 00:12:59.683 "ddgst": false 00:12:59.683 }, 00:12:59.683 "method": "bdev_nvme_attach_controller" 00:12:59.683 }' 00:12:59.683 [2024-10-09 03:15:42.900218] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:12:59.683 [2024-10-09 03:15:42.900741] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid71150 ] 00:12:59.942 [2024-10-09 03:15:43.050230] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:59.942 [2024-10-09 03:15:43.203961] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.942 [2024-10-09 03:15:43.204099] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:59.942 [2024-10-09 03:15:43.204102] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.942 [2024-10-09 03:15:43.218510] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:00.201 I/O targets: 00:13:00.201 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:00.201 00:13:00.201 00:13:00.201 CUnit - A unit testing framework for C - Version 2.1-3 00:13:00.201 http://cunit.sourceforge.net/ 00:13:00.201 00:13:00.201 00:13:00.201 Suite: bdevio tests on: Nvme1n1 00:13:00.201 Test: blockdev write read block ...passed 00:13:00.201 Test: blockdev write zeroes read block ...passed 00:13:00.201 Test: blockdev write zeroes read no split ...passed 00:13:00.201 Test: blockdev write zeroes read split ...passed 00:13:00.201 Test: blockdev write zeroes read split partial ...passed 00:13:00.201 Test: blockdev reset ...[2024-10-09 03:15:43.457151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:00.201 [2024-10-09 03:15:43.457247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea0720 (9): Bad file descriptor 00:13:00.201 [2024-10-09 03:15:43.473289] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:00.201 passed 00:13:00.201 Test: blockdev write read 8 blocks ...passed 00:13:00.201 Test: blockdev write read size > 128k ...passed 00:13:00.201 Test: blockdev write read invalid size ...passed 00:13:00.201 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:00.201 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:00.201 Test: blockdev write read max offset ...passed 00:13:00.201 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:00.201 Test: blockdev writev readv 8 blocks ...passed 00:13:00.201 Test: blockdev writev readv 30 x 1block ...passed 00:13:00.201 Test: blockdev writev readv block ...passed 00:13:00.201 Test: blockdev writev readv size > 128k ...passed 00:13:00.201 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:00.201 Test: blockdev comparev and writev ...[2024-10-09 03:15:43.482213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:00.201 [2024-10-09 03:15:43.482265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:00.201 [2024-10-09 03:15:43.482285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:00.201 [2024-10-09 03:15:43.482297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:00.201 [2024-10-09 03:15:43.482756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:00.201 [2024-10-09 03:15:43.482787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:00.201 [2024-10-09 03:15:43.482805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:00.201 [2024-10-09 03:15:43.482816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:00.201 [2024-10-09 03:15:43.483174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:00.201 [2024-10-09 03:15:43.483199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:00.201 [2024-10-09 03:15:43.483217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:00.201 [2024-10-09 03:15:43.483227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:00.201 [2024-10-09 03:15:43.483601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:00.201 [2024-10-09 03:15:43.483631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:00.201 [2024-10-09 03:15:43.483648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:00.201 [2024-10-09 03:15:43.483659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:00.201 passed 00:13:00.201 Test: blockdev nvme passthru rw ...passed 00:13:00.201 Test: blockdev nvme passthru vendor specific ...[2024-10-09 03:15:43.484629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:00.201 [2024-10-09 03:15:43.484659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:00.201 [2024-10-09 03:15:43.484781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:00.201 [2024-10-09 03:15:43.484797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:00.201 [2024-10-09 03:15:43.484906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:00.201 [2024-10-09 03:15:43.484927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:00.201 [2024-10-09 03:15:43.485045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:00.201 [2024-10-09 03:15:43.485079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:00.201 passed 00:13:00.201 Test: blockdev nvme admin passthru ...passed 00:13:00.201 Test: blockdev copy ...passed 00:13:00.201 00:13:00.201 Run Summary: Type Total Ran Passed Failed Inactive 00:13:00.201 suites 1 1 n/a 0 0 00:13:00.201 tests 23 23 23 0 0 00:13:00.201 asserts 152 152 152 0 n/a 00:13:00.201 00:13:00.201 Elapsed time = 0.167 seconds 00:13:00.769 03:15:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.769 03:15:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.769 03:15:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:00.769 03:15:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.769 03:15:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:00.769 03:15:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:13:00.769 03:15:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:00.770 03:15:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:13:00.770 03:15:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:00.770 03:15:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:13:00.770 03:15:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:00.770 03:15:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:00.770 rmmod nvme_tcp 00:13:00.770 rmmod nvme_fabrics 00:13:00.770 rmmod nvme_keyring 00:13:00.770 03:15:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:00.770 03:15:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:13:00.770 03:15:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:13:00.770 03:15:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 71110 ']' 00:13:00.770 03:15:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # killprocess 71110 00:13:00.770 03:15:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 71110 ']' 00:13:00.770 03:15:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 71110 00:13:00.770 03:15:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:13:00.770 03:15:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:00.770 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71110 00:13:00.770 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:13:00.770 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:13:00.770 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71110' 00:13:00.770 killing process with pid 71110 00:13:00.770 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 71110 00:13:00.770 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 71110 00:13:01.338 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:01.338 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:01.338 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:01.338 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:13:01.338 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:13:01.338 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:01.338 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:13:01.338 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:01.338 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:01.338 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:01.338 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:01.338 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:01.338 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:01.338 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:01.338 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:01.338 03:15:44 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:01.338 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:01.338 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:01.338 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:01.338 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:01.338 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:01.338 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:01.597 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:01.597 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.597 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:01.597 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.597 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:13:01.597 00:13:01.597 real 0m3.713s 00:13:01.597 user 0m11.145s 00:13:01.597 sys 0m1.517s 00:13:01.597 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:01.597 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:01.597 ************************************ 00:13:01.597 END TEST nvmf_bdevio_no_huge 00:13:01.597 ************************************ 00:13:01.597 03:15:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:01.597 03:15:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:01.597 03:15:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:01.597 03:15:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:01.597 ************************************ 00:13:01.597 START TEST nvmf_tls 00:13:01.597 ************************************ 00:13:01.597 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:01.597 * Looking for test storage... 
00:13:01.597 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:01.597 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:01.597 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:13:01.597 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:01.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.858 --rc genhtml_branch_coverage=1 00:13:01.858 --rc genhtml_function_coverage=1 00:13:01.858 --rc genhtml_legend=1 00:13:01.858 --rc geninfo_all_blocks=1 00:13:01.858 --rc geninfo_unexecuted_blocks=1 00:13:01.858 00:13:01.858 ' 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:01.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.858 --rc genhtml_branch_coverage=1 00:13:01.858 --rc genhtml_function_coverage=1 00:13:01.858 --rc genhtml_legend=1 00:13:01.858 --rc geninfo_all_blocks=1 00:13:01.858 --rc geninfo_unexecuted_blocks=1 00:13:01.858 00:13:01.858 ' 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:01.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.858 --rc genhtml_branch_coverage=1 00:13:01.858 --rc genhtml_function_coverage=1 00:13:01.858 --rc genhtml_legend=1 00:13:01.858 --rc geninfo_all_blocks=1 00:13:01.858 --rc geninfo_unexecuted_blocks=1 00:13:01.858 00:13:01.858 ' 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:01.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.858 --rc genhtml_branch_coverage=1 00:13:01.858 --rc genhtml_function_coverage=1 00:13:01.858 --rc genhtml_legend=1 00:13:01.858 --rc geninfo_all_blocks=1 00:13:01.858 --rc geninfo_unexecuted_blocks=1 00:13:01.858 00:13:01.858 ' 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:01.858 03:15:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:01.858 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:01.858 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:01.859 
03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@458 -- # nvmf_veth_init 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:01.859 Cannot find device "nvmf_init_br" 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:01.859 Cannot find device "nvmf_init_br2" 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:13:01.859 03:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:01.859 Cannot find device "nvmf_tgt_br" 00:13:01.859 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:13:01.859 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:01.859 Cannot find device "nvmf_tgt_br2" 00:13:01.859 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:13:01.859 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:01.859 Cannot find device "nvmf_init_br" 00:13:01.859 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:13:01.859 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:01.859 Cannot find device "nvmf_init_br2" 00:13:01.859 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:13:01.859 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:01.859 Cannot find device "nvmf_tgt_br" 00:13:01.859 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:13:01.859 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:01.859 Cannot find device "nvmf_tgt_br2" 00:13:01.859 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:13:01.859 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:01.859 Cannot find device "nvmf_br" 00:13:01.859 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:13:01.859 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:01.859 Cannot find device "nvmf_init_if" 00:13:01.859 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:13:01.859 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:01.859 Cannot find device "nvmf_init_if2" 00:13:01.859 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:13:01.859 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:01.859 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:01.859 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:13:01.859 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:01.859 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:01.859 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:13:01.859 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:01.859 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:01.859 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:01.859 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:01.859 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:02.129 03:15:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:02.129 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:02.129 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:13:02.129 00:13:02.129 --- 10.0.0.3 ping statistics --- 00:13:02.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.129 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:02.129 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:02.129 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:13:02.129 00:13:02.129 --- 10.0.0.4 ping statistics --- 00:13:02.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.129 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:02.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:02.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:13:02.129 00:13:02.129 --- 10.0.0.1 ping statistics --- 00:13:02.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.129 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:02.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:02.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:13:02.129 00:13:02.129 --- 10.0.0.2 ping statistics --- 00:13:02.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.129 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # return 0 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=71393 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 71393 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71393 ']' 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:02.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:02.129 03:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:02.416 [2024-10-09 03:15:45.439817] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
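[Editor's note] The trace above is nvmf_veth_init building the test topology: the initiator-side veth ends (nvmf_init_if, nvmf_init_if2) stay in the root namespace, the target-side ends (nvmf_tgt_if, nvmf_tgt_if2) are moved into the nvmf_tgt_ns_spdk namespace, all of the peer ends are enslaved to the nvmf_br bridge, iptables is opened for TCP port 4420, and the 10.0.0.1-4 addresses are ping-checked before nvmf_tgt is launched inside the namespace with --wait-for-rpc. A condensed sketch of the same setup follows; it assumes root privileges, iproute2 and iptables, paths relative to an SPDK checkout, and shows only one initiator/target pair (the trace creates a second pair, nvmf_init_if2 and nvmf_tgt_if2, the same way):

    # namespace for the target plus one veth pair per side (sketch of nvmf_veth_init)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # addressing as in the trace: initiator 10.0.0.1, target 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # join the peer ends through a bridge so the two namespaces can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # open the NVMe/TCP port and allow bridged traffic through the FORWARD chain
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # reachability check, then start the SPDK target inside the namespace
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -m 0x2 --wait-for-rpc &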
00:13:02.416 [2024-10-09 03:15:45.439884] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.416 [2024-10-09 03:15:45.582629] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.416 [2024-10-09 03:15:45.694717] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.416 [2024-10-09 03:15:45.694789] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.416 [2024-10-09 03:15:45.694804] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.416 [2024-10-09 03:15:45.694815] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:02.416 [2024-10-09 03:15:45.694824] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:02.416 [2024-10-09 03:15:45.695300] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.354 03:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:03.354 03:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:03.354 03:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:03.354 03:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:03.354 03:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:03.354 03:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.354 03:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:13:03.354 03:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:03.613 true 00:13:03.613 03:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:03.613 03:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:13:03.872 03:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:13:03.872 03:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:13:03.872 03:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:04.131 03:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:04.131 03:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:13:04.390 03:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:13:04.390 03:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:13:04.390 03:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:04.649 03:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:13:04.649 03:15:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:04.908 03:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:13:04.908 03:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:13:04.908 03:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:04.908 03:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:13:05.167 03:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:13:05.167 03:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:13:05.167 03:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:05.425 03:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:13:05.425 03:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:05.684 03:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:13:05.684 03:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:13:05.684 03:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:05.943 03:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:05.943 03:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:13:06.202 03:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:13:06.202 03:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:13:06.202 03:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:06.202 03:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:06.202 03:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:13:06.202 03:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:13:06.202 03:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:13:06.202 03:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:13:06.202 03:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:13:06.202 03:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:06.202 03:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:06.202 03:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:06.202 03:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:13:06.202 03:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:13:06.202 03:15:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:13:06.202 03:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:13:06.202 03:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:13:06.202 03:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:06.202 03:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:06.202 03:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.Kqg8vpcSEg 00:13:06.202 03:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:13:06.202 03:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.mjd2lJBVDt 00:13:06.202 03:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:06.202 03:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:06.202 03:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Kqg8vpcSEg 00:13:06.202 03:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.mjd2lJBVDt 00:13:06.202 03:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:06.461 03:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:06.719 [2024-10-09 03:15:49.954292] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:06.720 03:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.Kqg8vpcSEg 00:13:06.720 03:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Kqg8vpcSEg 00:13:06.720 03:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:06.978 [2024-10-09 03:15:50.237023] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:06.978 03:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:07.237 03:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:07.496 [2024-10-09 03:15:50.765628] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:07.496 [2024-10-09 03:15:50.765858] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:07.496 03:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:07.756 malloc0 00:13:07.756 03:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:08.015 03:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 
/tmp/tmp.Kqg8vpcSEg 00:13:08.274 03:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:08.533 03:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Kqg8vpcSEg 00:13:20.741 Initializing NVMe Controllers 00:13:20.741 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:20.741 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:20.741 Initialization complete. Launching workers. 00:13:20.741 ======================================================== 00:13:20.741 Latency(us) 00:13:20.741 Device Information : IOPS MiB/s Average min max 00:13:20.741 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9529.90 37.23 6717.38 1478.12 8326.81 00:13:20.741 ======================================================== 00:13:20.741 Total : 9529.90 37.23 6717.38 1478.12 8326.81 00:13:20.741 00:13:20.741 03:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Kqg8vpcSEg 00:13:20.741 03:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:20.741 03:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:20.741 03:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:20.741 03:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Kqg8vpcSEg 00:13:20.741 03:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:20.741 03:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71626 00:13:20.741 03:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:20.741 03:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:20.741 03:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71626 /var/tmp/bdevperf.sock 00:13:20.741 03:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71626 ']' 00:13:20.741 03:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:20.741 03:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:20.741 03:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:20.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
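[Editor's note] With the target waiting for RPCs, target/tls.sh switches the socket layer to the ssl implementation, pins TLS 1.3, and generates two PSKs with format_interchange_psk; each key is the NVMe TLS interchange string seen above (prefix NVMeTLSkey-1, hash indicator 01, then a base64 payload carrying the key material plus a checksum), written to a mktemp file and restricted to mode 0600. setup_nvmf_tgt then creates the TCP transport, subsystem cnode1, a TLS listener (-k) on 10.0.0.3:4420, a malloc namespace, registers the first key in the keyring as key0, and allows host1 with that PSK; the spdk_nvme_perf run above exercises that listener with -S ssl and --psk-path pointing at the same file. A hedged sketch of the target-side sequence, with every rpc.py invocation copied from the trace, the key string taken literally from the output above, and paths assumed relative to an SPDK checkout:

    rpc=./scripts/rpc.py   # the trace runs this against the default /var/tmp/spdk.sock

    # TLS comes from the ssl socket implementation; pin TLS 1.3 before framework init
    $rpc sock_set_default_impl -i ssl
    $rpc sock_impl_set_options -i ssl --tls-version 13
    $rpc framework_start_init

    # PSK in NVMe TLS interchange format, exactly as printed by format_interchange_psk
    key_path=$(mktemp)
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
    chmod 0600 "$key_path"

    # TLS-enabled subsystem: -k on the listener requests TLS, --psk ties host1 to key0
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 "$key_path"
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0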
00:13:20.741 03:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:20.741 03:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:20.741 [2024-10-09 03:16:02.024816] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:13:20.741 [2024-10-09 03:16:02.024913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71626 ] 00:13:20.741 [2024-10-09 03:16:02.164465] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.741 [2024-10-09 03:16:02.260760] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.741 [2024-10-09 03:16:02.315710] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:20.741 03:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:20.741 03:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:20.741 03:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Kqg8vpcSEg 00:13:20.741 03:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:20.741 [2024-10-09 03:16:02.903563] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:20.741 TLSTESTn1 00:13:20.741 03:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:20.741 Running I/O for 10 seconds... 
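[Editor's note] run_bdevperf is the initiator-side half of the test: bdevperf is started idle (-z) on its own RPC socket with a 128-deep, 4 KiB verify workload, the same key file is registered in bdevperf's keyring, a TLS controller is attached to 10.0.0.3:4420 as host1, and bdevperf.py perform_tests drives the ten-second run whose throughput numbers follow. A sketch of that client-side sequence, with flags copied from the trace, paths assumed relative to an SPDK checkout, and the key path being the mktemp file from earlier in the log:

    # start bdevperf idle (-z) on a private RPC socket: queue depth 128, 4 KiB verify, 10 s
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

    # hand bdevperf the PSK and attach a TLS-protected controller to the target
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Kqg8vpcSEg
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

    # run the queued workload against the TLSTESTn1 bdev created by the attach
    ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests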
00:13:21.937 3723.00 IOPS, 14.54 MiB/s [2024-10-09T03:16:06.178Z] 4013.50 IOPS, 15.68 MiB/s [2024-10-09T03:16:07.115Z] 4111.33 IOPS, 16.06 MiB/s [2024-10-09T03:16:08.493Z] 4155.50 IOPS, 16.23 MiB/s [2024-10-09T03:16:09.431Z] 4177.80 IOPS, 16.32 MiB/s [2024-10-09T03:16:10.423Z] 4194.67 IOPS, 16.39 MiB/s [2024-10-09T03:16:11.360Z] 4197.71 IOPS, 16.40 MiB/s [2024-10-09T03:16:12.302Z] 4185.00 IOPS, 16.35 MiB/s [2024-10-09T03:16:13.241Z] 4178.56 IOPS, 16.32 MiB/s [2024-10-09T03:16:13.241Z] 4189.40 IOPS, 16.36 MiB/s 00:13:29.938 Latency(us) 00:13:29.938 [2024-10-09T03:16:13.241Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.938 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:29.938 Verification LBA range: start 0x0 length 0x2000 00:13:29.938 TLSTESTn1 : 10.02 4195.38 16.39 0.00 0.00 30455.08 5600.35 24784.52 00:13:29.938 [2024-10-09T03:16:13.241Z] =================================================================================================================== 00:13:29.938 [2024-10-09T03:16:13.241Z] Total : 4195.38 16.39 0.00 0.00 30455.08 5600.35 24784.52 00:13:29.938 { 00:13:29.938 "results": [ 00:13:29.938 { 00:13:29.938 "job": "TLSTESTn1", 00:13:29.938 "core_mask": "0x4", 00:13:29.938 "workload": "verify", 00:13:29.938 "status": "finished", 00:13:29.938 "verify_range": { 00:13:29.938 "start": 0, 00:13:29.938 "length": 8192 00:13:29.938 }, 00:13:29.938 "queue_depth": 128, 00:13:29.938 "io_size": 4096, 00:13:29.938 "runtime": 10.016253, 00:13:29.938 "iops": 4195.38124685948, 00:13:29.938 "mibps": 16.388207995544843, 00:13:29.938 "io_failed": 0, 00:13:29.938 "io_timeout": 0, 00:13:29.938 "avg_latency_us": 30455.076147991742, 00:13:29.938 "min_latency_us": 5600.349090909091, 00:13:29.938 "max_latency_us": 24784.523636363636 00:13:29.938 } 00:13:29.938 ], 00:13:29.938 "core_count": 1 00:13:29.938 } 00:13:29.938 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:29.938 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71626 00:13:29.938 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71626 ']' 00:13:29.938 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71626 00:13:29.938 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:29.938 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:29.938 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71626 00:13:29.938 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:29.938 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:29.938 killing process with pid 71626 00:13:29.938 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71626' 00:13:29.938 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71626 00:13:29.938 Received shutdown signal, test time was about 10.000000 seconds 00:13:29.938 00:13:29.938 Latency(us) 00:13:29.938 [2024-10-09T03:16:13.241Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.938 [2024-10-09T03:16:13.241Z] 
=================================================================================================================== 00:13:29.938 [2024-10-09T03:16:13.241Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:29.938 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71626 00:13:30.198 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mjd2lJBVDt 00:13:30.198 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:30.198 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mjd2lJBVDt 00:13:30.198 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:30.198 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:30.198 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:30.198 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:30.198 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mjd2lJBVDt 00:13:30.198 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:30.198 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:30.198 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:30.198 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.mjd2lJBVDt 00:13:30.198 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:30.198 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71753 00:13:30.198 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:30.198 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71753 /var/tmp/bdevperf.sock 00:13:30.198 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:30.198 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71753 ']' 00:13:30.198 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:30.198 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:30.198 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:30.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:13:30.198 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:30.198 03:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:30.457 [2024-10-09 03:16:13.533355] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:13:30.457 [2024-10-09 03:16:13.533488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71753 ] 00:13:30.457 [2024-10-09 03:16:13.671188] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.717 [2024-10-09 03:16:13.813473] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:30.717 [2024-10-09 03:16:13.887449] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:31.284 03:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:31.284 03:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:31.284 03:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mjd2lJBVDt 00:13:31.542 03:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:31.802 [2024-10-09 03:16:15.025920] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:31.802 [2024-10-09 03:16:15.031926] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:31.802 [2024-10-09 03:16:15.032881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d84090 (107): Transport endpoint is not connected 00:13:31.802 [2024-10-09 03:16:15.033866] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d84090 (9): Bad file descriptor 00:13:31.802 [2024-10-09 03:16:15.034862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:31.802 [2024-10-09 03:16:15.034883] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:31.802 [2024-10-09 03:16:15.034904] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:13:31.802 [2024-10-09 03:16:15.034915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:13:31.802 request: 00:13:31.802 { 00:13:31.802 "name": "TLSTEST", 00:13:31.802 "trtype": "tcp", 00:13:31.802 "traddr": "10.0.0.3", 00:13:31.802 "adrfam": "ipv4", 00:13:31.802 "trsvcid": "4420", 00:13:31.802 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:31.802 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:31.802 "prchk_reftag": false, 00:13:31.802 "prchk_guard": false, 00:13:31.802 "hdgst": false, 00:13:31.802 "ddgst": false, 00:13:31.802 "psk": "key0", 00:13:31.802 "allow_unrecognized_csi": false, 00:13:31.802 "method": "bdev_nvme_attach_controller", 00:13:31.802 "req_id": 1 00:13:31.802 } 00:13:31.802 Got JSON-RPC error response 00:13:31.802 response: 00:13:31.802 { 00:13:31.802 "code": -5, 00:13:31.802 "message": "Input/output error" 00:13:31.802 } 00:13:31.802 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71753 00:13:31.802 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71753 ']' 00:13:31.802 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71753 00:13:31.802 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:31.802 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:31.802 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71753 00:13:31.802 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:31.802 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:31.802 killing process with pid 71753 00:13:31.802 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71753' 00:13:31.802 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71753 00:13:31.802 Received shutdown signal, test time was about 10.000000 seconds 00:13:31.802 00:13:31.802 Latency(us) 00:13:31.802 [2024-10-09T03:16:15.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.802 [2024-10-09T03:16:15.105Z] =================================================================================================================== 00:13:31.802 [2024-10-09T03:16:15.105Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:31.802 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71753 00:13:32.371 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:32.371 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:32.371 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:32.371 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:32.371 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:32.371 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Kqg8vpcSEg 00:13:32.371 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:32.371 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Kqg8vpcSEg 
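[Editor's note] This is the first negative case: the bdevperf instance above was handed the second key, /tmp/tmp.mjd2lJBVDt, which was never registered on the target for host1, so the TLS handshake fails; the initiator sees errno 107 (Transport endpoint is not connected) and bdev_nvme_attach_controller returns the -5 Input/output error shown in the JSON-RPC response, and the NOT wrapper around run_bdevperf turns that expected failure into a pass. A simplified sketch of the assertion (NOT is shown here as a stand-in for the autotest helper, which inverts the exit status of the command it wraps; the real helper lives in autotest_common.sh):

    # minimal stand-in for the autotest NOT helper: succeed only if the command fails
    NOT() { if "$@"; then return 1; else return 0; fi; }

    # register the mismatched key and require the TLS attach to fail
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mjd2lJBVDt
    NOT ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 \
        && echo "attach with the wrong PSK failed, as expected"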
00:13:32.371 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:32.371 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:32.371 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:32.371 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:32.371 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Kqg8vpcSEg 00:13:32.371 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:32.371 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:32.371 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:13:32.371 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Kqg8vpcSEg 00:13:32.371 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:32.371 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71787 00:13:32.371 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:32.371 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:32.371 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71787 /var/tmp/bdevperf.sock 00:13:32.371 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71787 ']' 00:13:32.371 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:32.371 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:32.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:32.371 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:32.371 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:32.371 03:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:32.371 [2024-10-09 03:16:15.439140] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:13:32.371 [2024-10-09 03:16:15.439226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71787 ] 00:13:32.371 [2024-10-09 03:16:15.571676] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.630 [2024-10-09 03:16:15.697593] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:32.631 [2024-10-09 03:16:15.772343] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:33.199 03:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:33.199 03:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:33.199 03:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Kqg8vpcSEg 00:13:33.458 03:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:13:33.717 [2024-10-09 03:16:16.927976] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:33.717 [2024-10-09 03:16:16.933232] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:33.717 [2024-10-09 03:16:16.933294] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:33.717 [2024-10-09 03:16:16.933382] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:33.717 [2024-10-09 03:16:16.933925] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f090 (107): Transport endpoint is not connected 00:13:33.717 [2024-10-09 03:16:16.934911] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f090 (9): Bad file descriptor 00:13:33.717 [2024-10-09 03:16:16.935907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:33.717 [2024-10-09 03:16:16.935943] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:33.717 [2024-10-09 03:16:16.935953] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:13:33.717 [2024-10-09 03:16:16.935975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:13:33.717 request: 00:13:33.717 { 00:13:33.717 "name": "TLSTEST", 00:13:33.717 "trtype": "tcp", 00:13:33.717 "traddr": "10.0.0.3", 00:13:33.717 "adrfam": "ipv4", 00:13:33.717 "trsvcid": "4420", 00:13:33.717 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:33.717 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:13:33.717 "prchk_reftag": false, 00:13:33.717 "prchk_guard": false, 00:13:33.717 "hdgst": false, 00:13:33.717 "ddgst": false, 00:13:33.717 "psk": "key0", 00:13:33.717 "allow_unrecognized_csi": false, 00:13:33.717 "method": "bdev_nvme_attach_controller", 00:13:33.717 "req_id": 1 00:13:33.717 } 00:13:33.717 Got JSON-RPC error response 00:13:33.717 response: 00:13:33.717 { 00:13:33.717 "code": -5, 00:13:33.717 "message": "Input/output error" 00:13:33.717 } 00:13:33.717 03:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71787 00:13:33.717 03:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71787 ']' 00:13:33.717 03:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71787 00:13:33.717 03:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:33.717 03:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:33.717 03:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71787 00:13:33.717 killing process with pid 71787 00:13:33.717 Received shutdown signal, test time was about 10.000000 seconds 00:13:33.717 00:13:33.717 Latency(us) 00:13:33.717 [2024-10-09T03:16:17.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.718 [2024-10-09T03:16:17.021Z] =================================================================================================================== 00:13:33.718 [2024-10-09T03:16:17.021Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:33.718 03:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:33.718 03:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:33.718 03:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71787' 00:13:33.718 03:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71787 00:13:33.718 03:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71787 00:13:34.287 03:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:34.287 03:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:34.287 03:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:34.287 03:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:34.287 03:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:34.287 03:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Kqg8vpcSEg 00:13:34.287 03:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:34.287 03:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Kqg8vpcSEg 
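[Editor's note] The second and third negative cases change the identity rather than the key bytes: host2 (above) and cnode2 (below) were never configured with a PSK, so the target fails earlier, at PSK lookup, logging "Could not find PSK for identity: NVMe0R01 ...". The identity in that message combines the fixed NVMe0R01 tag with the host NQN and subsystem NQN of the connection attempt, which is why a correct key presented under an unknown host or subsystem still cannot complete the handshake. A small illustration of how that identity is composed, using the values from the error above (the composition is inferred from the logged string, not from the TLS code itself):

    # PSK lookup identity as it appears in the target's error message above
    hostnqn=nqn.2016-06.io.spdk:host2
    subnqn=nqn.2016-06.io.spdk:cnode1
    identity="NVMe0R01 ${hostnqn} ${subnqn}"
    echo "$identity"   # NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1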
00:13:34.287 03:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:34.287 03:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:34.287 03:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:34.287 03:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:34.287 03:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Kqg8vpcSEg 00:13:34.287 03:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:34.287 03:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:13:34.287 03:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:34.287 03:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Kqg8vpcSEg 00:13:34.287 03:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:34.287 03:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:34.287 03:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71820 00:13:34.287 03:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:34.287 03:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71820 /var/tmp/bdevperf.sock 00:13:34.287 03:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71820 ']' 00:13:34.287 03:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:34.287 03:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:34.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:34.287 03:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:34.287 03:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:34.287 03:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:34.287 [2024-10-09 03:16:17.347861] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:13:34.287 [2024-10-09 03:16:17.347953] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71820 ] 00:13:34.287 [2024-10-09 03:16:17.482349] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.287 [2024-10-09 03:16:17.575933] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:34.547 [2024-10-09 03:16:17.653188] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:35.115 03:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:35.115 03:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:35.115 03:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Kqg8vpcSEg 00:13:35.374 03:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:35.634 [2024-10-09 03:16:18.858878] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:35.634 [2024-10-09 03:16:18.864097] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:35.634 [2024-10-09 03:16:18.864152] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:35.634 [2024-10-09 03:16:18.864244] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:35.634 [2024-10-09 03:16:18.864778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x845090 (107): Transport endpoint is not connected 00:13:35.634 [2024-10-09 03:16:18.865762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x845090 (9): Bad file descriptor 00:13:35.634 [2024-10-09 03:16:18.866757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:13:35.634 [2024-10-09 03:16:18.866795] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:35.634 [2024-10-09 03:16:18.866806] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:13:35.634 [2024-10-09 03:16:18.866828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
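The failure here is the point of the test: the target builds the TLS PSK identity from the host and subsystem NQNs ("NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2") and looks for a key registered for that exact pairing. Since host1 was never added to cnode2 with a PSK, the lookup fails, the TCP connection is torn down (the errno 107 messages), and the attach comes back as the -5 Input/output error shown just below. For comparison, the target-side registration that would satisfy this lookup is sketched here; it is hypothetical for this negative test, which deliberately leaves it out:

    # Target-side registration the identity lookup above is searching for
    # (hypothetical here - the passing tests only register host1 on cnode1).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc keyring_file_add_key key0 /tmp/tmp.Kqg8vpcSEg
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode2 \
        nqn.2016-06.io.spdk:host1 --psk key0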
00:13:35.634 request: 00:13:35.634 { 00:13:35.634 "name": "TLSTEST", 00:13:35.634 "trtype": "tcp", 00:13:35.634 "traddr": "10.0.0.3", 00:13:35.634 "adrfam": "ipv4", 00:13:35.634 "trsvcid": "4420", 00:13:35.634 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:13:35.634 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:35.634 "prchk_reftag": false, 00:13:35.634 "prchk_guard": false, 00:13:35.634 "hdgst": false, 00:13:35.634 "ddgst": false, 00:13:35.634 "psk": "key0", 00:13:35.634 "allow_unrecognized_csi": false, 00:13:35.634 "method": "bdev_nvme_attach_controller", 00:13:35.634 "req_id": 1 00:13:35.634 } 00:13:35.634 Got JSON-RPC error response 00:13:35.634 response: 00:13:35.634 { 00:13:35.634 "code": -5, 00:13:35.634 "message": "Input/output error" 00:13:35.634 } 00:13:35.634 03:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71820 00:13:35.634 03:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71820 ']' 00:13:35.634 03:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71820 00:13:35.634 03:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:35.634 03:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:35.634 03:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71820 00:13:35.634 killing process with pid 71820 00:13:35.634 Received shutdown signal, test time was about 10.000000 seconds 00:13:35.634 00:13:35.634 Latency(us) 00:13:35.634 [2024-10-09T03:16:18.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.634 [2024-10-09T03:16:18.937Z] =================================================================================================================== 00:13:35.634 [2024-10-09T03:16:18.937Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:35.634 03:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:35.634 03:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:35.634 03:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71820' 00:13:35.634 03:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71820 00:13:35.634 03:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71820 00:13:36.222 03:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:36.222 03:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:36.222 03:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:36.222 03:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:36.222 03:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:36.222 03:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:36.222 03:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:36.222 03:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:36.222 03:16:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:36.222 03:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:36.222 03:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:36.222 03:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:36.222 03:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:36.222 03:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:36.222 03:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:36.222 03:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:36.222 03:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:13:36.222 03:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:36.222 03:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71850 00:13:36.222 03:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:36.222 03:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:36.222 03:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71850 /var/tmp/bdevperf.sock 00:13:36.222 03:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71850 ']' 00:13:36.222 03:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:36.222 03:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:36.222 03:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:36.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:36.222 03:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:36.222 03:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:36.222 [2024-10-09 03:16:19.286439] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
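The NOT wrapper around run_bdevperf inverts the result, so this negative test passes only if the attach fails; the es=1 bookkeeping and the (( !es == 0 )) check above are that inversion. A rough sketch of such a wrapper (simplified; the real helper in autotest_common.sh also gives exit codes above 128, i.e. signal deaths, special treatment, which is what the (( es > 128 )) check in the trace is for):

    # Rough sketch of a NOT() wrapper: succeed only when the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))      # invert: a non-zero exit from the command means success here
    }

    # Usage as in target/tls.sh@156: attaching with an empty PSK path must fail.
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''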
00:13:36.222 [2024-10-09 03:16:19.286539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71850 ] 00:13:36.222 [2024-10-09 03:16:19.423864] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.489 [2024-10-09 03:16:19.556335] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:36.489 [2024-10-09 03:16:19.630529] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:37.056 03:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:37.056 03:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:37.056 03:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:13:37.314 [2024-10-09 03:16:20.539867] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:13:37.314 [2024-10-09 03:16:20.539950] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:37.314 request: 00:13:37.314 { 00:13:37.314 "name": "key0", 00:13:37.314 "path": "", 00:13:37.314 "method": "keyring_file_add_key", 00:13:37.314 "req_id": 1 00:13:37.314 } 00:13:37.314 Got JSON-RPC error response 00:13:37.314 response: 00:13:37.314 { 00:13:37.314 "code": -1, 00:13:37.314 "message": "Operation not permitted" 00:13:37.314 } 00:13:37.314 03:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:37.573 [2024-10-09 03:16:20.836131] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:37.573 [2024-10-09 03:16:20.836239] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:13:37.573 request: 00:13:37.573 { 00:13:37.573 "name": "TLSTEST", 00:13:37.573 "trtype": "tcp", 00:13:37.573 "traddr": "10.0.0.3", 00:13:37.573 "adrfam": "ipv4", 00:13:37.573 "trsvcid": "4420", 00:13:37.573 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:37.573 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:37.573 "prchk_reftag": false, 00:13:37.573 "prchk_guard": false, 00:13:37.573 "hdgst": false, 00:13:37.573 "ddgst": false, 00:13:37.573 "psk": "key0", 00:13:37.573 "allow_unrecognized_csi": false, 00:13:37.573 "method": "bdev_nvme_attach_controller", 00:13:37.573 "req_id": 1 00:13:37.573 } 00:13:37.573 Got JSON-RPC error response 00:13:37.573 response: 00:13:37.573 { 00:13:37.573 "code": -126, 00:13:37.573 "message": "Required key not available" 00:13:37.573 } 00:13:37.573 03:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71850 00:13:37.573 03:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71850 ']' 00:13:37.573 03:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71850 00:13:37.573 03:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:37.573 03:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:37.573 03:16:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71850 00:13:37.832 killing process with pid 71850 00:13:37.832 Received shutdown signal, test time was about 10.000000 seconds 00:13:37.832 00:13:37.832 Latency(us) 00:13:37.832 [2024-10-09T03:16:21.135Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.832 [2024-10-09T03:16:21.135Z] =================================================================================================================== 00:13:37.832 [2024-10-09T03:16:21.135Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:37.832 03:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:37.832 03:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:37.832 03:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71850' 00:13:37.832 03:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71850 00:13:37.832 03:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71850 00:13:38.091 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:38.091 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:38.091 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:38.091 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:38.091 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:38.091 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71393 00:13:38.091 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71393 ']' 00:13:38.091 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71393 00:13:38.091 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:38.091 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:38.091 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71393 00:13:38.091 killing process with pid 71393 00:13:38.091 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:38.091 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:38.091 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71393' 00:13:38.091 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71393 00:13:38.091 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71393 00:13:38.350 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:13:38.350 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:13:38.350 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:13:38.350 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 
-- # prefix=NVMeTLSkey-1 00:13:38.350 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:13:38.350 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:13:38.350 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:13:38.350 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:38.350 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:13:38.350 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.KoRS3lc0K0 00:13:38.350 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:38.350 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.KoRS3lc0K0 00:13:38.350 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:13:38.350 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:38.350 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:38.350 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:38.350 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=71894 00:13:38.350 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:38.350 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 71894 00:13:38.350 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71894 ']' 00:13:38.350 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.350 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:38.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.350 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.350 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:38.350 03:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:38.350 [2024-10-09 03:16:21.634148] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:13:38.350 [2024-10-09 03:16:21.634255] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:38.610 [2024-10-09 03:16:21.775580] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.610 [2024-10-09 03:16:21.889322] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:38.610 [2024-10-09 03:16:21.889394] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
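The key_long generated just above is a PSK in the NVMe TLS interchange format: the literal prefix NVMeTLSkey-1, a two-digit hash identifier taken from the digest argument (02 here), and a base64 blob, all colon-separated. A sketch of that formatting step, assuming (as the interchange format defines it) that the blob is the configured key bytes followed by their 4-byte little-endian CRC-32 check value; the python3 call stands in for the 'python -' heredoc used by format_key above:

    key=00112233445566778899aabbccddeeff0011223344556677   # treated as a literal 48-char string
    digest=2
    # base64(key bytes + little-endian CRC-32 of the key), wrapped in the interchange framing.
    python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); c=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(k+c).decode()))' "$key" "$digest"

The result is then written to a mktemp file and chmod'ed to 0600, since the keyring refuses keys that other users can read (exercised deliberately later in this log).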
00:13:38.610 [2024-10-09 03:16:21.889422] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:38.610 [2024-10-09 03:16:21.889430] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:38.610 [2024-10-09 03:16:21.889437] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:38.610 [2024-10-09 03:16:21.889847] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.868 [2024-10-09 03:16:21.949695] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:39.436 03:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:39.436 03:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:39.436 03:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:39.436 03:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:39.436 03:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:39.436 03:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.436 03:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.KoRS3lc0K0 00:13:39.436 03:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.KoRS3lc0K0 00:13:39.436 03:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:39.695 [2024-10-09 03:16:22.943412] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:39.695 03:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:39.954 03:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:40.213 [2024-10-09 03:16:23.447620] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:40.213 [2024-10-09 03:16:23.447886] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:40.213 03:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:40.472 malloc0 00:13:40.472 03:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:40.731 03:16:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.KoRS3lc0K0 00:13:40.990 03:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:41.249 03:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KoRS3lc0K0 00:13:41.249 03:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
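setup_nvmf_tgt (target/tls.sh@50-59, traced above) builds the target side used by every TLS case in this file. Collected into one block, against the target's default /var/tmp/spdk.sock RPC socket, the sequence is (the key path is this run's mktemp output):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    key=/tmp/tmp.KoRS3lc0K0                                # 0600 PSK file generated above

    $rpc nvmf_create_transport -t tcp -o                   # TCP transport
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS listener
    $rpc bdev_malloc_create 32 4096 -b malloc0             # 32 MB malloc bdev, 4 KiB blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 "$key"
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0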
00:13:41.249 03:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:41.249 03:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:41.249 03:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.KoRS3lc0K0 00:13:41.249 03:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:41.249 03:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71955 00:13:41.249 03:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:41.249 03:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:41.249 03:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71955 /var/tmp/bdevperf.sock 00:13:41.249 03:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71955 ']' 00:13:41.249 03:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:41.249 03:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:41.249 03:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:41.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:41.249 03:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:41.249 03:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:41.249 [2024-10-09 03:16:24.479534] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:13:41.249 [2024-10-09 03:16:24.480045] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71955 ] 00:13:41.507 [2024-10-09 03:16:24.613626] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.507 [2024-10-09 03:16:24.735941] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.766 [2024-10-09 03:16:24.812618] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:42.334 03:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:42.334 03:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:42.334 03:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KoRS3lc0K0 00:13:42.593 03:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:42.859 [2024-10-09 03:16:26.011324] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:42.859 TLSTESTn1 00:13:42.859 03:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:43.117 Running I/O for 10 seconds... 00:13:44.992 3840.00 IOPS, 15.00 MiB/s [2024-10-09T03:16:29.672Z] 3872.50 IOPS, 15.13 MiB/s [2024-10-09T03:16:30.608Z] 3884.33 IOPS, 15.17 MiB/s [2024-10-09T03:16:31.546Z] 3897.00 IOPS, 15.22 MiB/s [2024-10-09T03:16:32.483Z] 3914.60 IOPS, 15.29 MiB/s [2024-10-09T03:16:33.420Z] 3938.67 IOPS, 15.39 MiB/s [2024-10-09T03:16:34.356Z] 3951.29 IOPS, 15.43 MiB/s [2024-10-09T03:16:35.294Z] 3971.38 IOPS, 15.51 MiB/s [2024-10-09T03:16:36.670Z] 3974.44 IOPS, 15.53 MiB/s [2024-10-09T03:16:36.670Z] 3979.20 IOPS, 15.54 MiB/s 00:13:53.367 Latency(us) 00:13:53.367 [2024-10-09T03:16:36.670Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.367 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:53.367 Verification LBA range: start 0x0 length 0x2000 00:13:53.367 TLSTESTn1 : 10.02 3984.61 15.56 0.00 0.00 32064.77 6523.81 26333.56 00:13:53.367 [2024-10-09T03:16:36.670Z] =================================================================================================================== 00:13:53.367 [2024-10-09T03:16:36.670Z] Total : 3984.61 15.56 0.00 0.00 32064.77 6523.81 26333.56 00:13:53.367 { 00:13:53.367 "results": [ 00:13:53.367 { 00:13:53.367 "job": "TLSTESTn1", 00:13:53.367 "core_mask": "0x4", 00:13:53.367 "workload": "verify", 00:13:53.367 "status": "finished", 00:13:53.367 "verify_range": { 00:13:53.368 "start": 0, 00:13:53.368 "length": 8192 00:13:53.368 }, 00:13:53.368 "queue_depth": 128, 00:13:53.368 "io_size": 4096, 00:13:53.368 "runtime": 10.018536, 00:13:53.368 "iops": 3984.614119268524, 00:13:53.368 "mibps": 15.564898903392672, 00:13:53.368 "io_failed": 0, 00:13:53.368 "io_timeout": 0, 00:13:53.368 "avg_latency_us": 32064.76664747677, 00:13:53.368 "min_latency_us": 6523.810909090909, 00:13:53.368 
"max_latency_us": 26333.556363636362 00:13:53.368 } 00:13:53.368 ], 00:13:53.368 "core_count": 1 00:13:53.368 } 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71955 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71955 ']' 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71955 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71955 00:13:53.368 killing process with pid 71955 00:13:53.368 Received shutdown signal, test time was about 10.000000 seconds 00:13:53.368 00:13:53.368 Latency(us) 00:13:53.368 [2024-10-09T03:16:36.671Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.368 [2024-10-09T03:16:36.671Z] =================================================================================================================== 00:13:53.368 [2024-10-09T03:16:36.671Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71955' 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71955 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71955 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.KoRS3lc0K0 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KoRS3lc0K0 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KoRS3lc0K0 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KoRS3lc0K0 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.KoRS3lc0K0 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72091 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72091 /var/tmp/bdevperf.sock 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72091 ']' 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:53.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:53.368 03:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:53.627 [2024-10-09 03:16:36.707276] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
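This case (target/tls.sh@172) reruns the attach with the key file deliberately chmod'ed to 0666 at @171 above and expects keyring_file_add_key to refuse it; the "Invalid permissions for key file ... 0100666" error just below is that refusal. Judging from the passing 0600 runs and this failing 0666 one, the rule is that group and others must have no access to the key. A quick pre-flight check along those lines (a sketch, not SPDK's own code):

    # Refuse key files that group or others can touch; the keyring expects 0600-style modes.
    check_psk_perms() {
        local mode
        mode=$(stat -c '%a' "$1")          # e.g. 600 or 666
        if (( 8#$mode & 8#077 )); then     # any group/other permission bits set?
            echo "key file $1 has mode $mode; chmod 0600 it before keyring_file_add_key" >&2
            return 1
        fi
    }

    check_psk_perms /tmp/tmp.KoRS3lc0K0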
00:13:53.627 [2024-10-09 03:16:36.707602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72091 ] 00:13:53.627 [2024-10-09 03:16:36.843861] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.886 [2024-10-09 03:16:36.991893] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:53.886 [2024-10-09 03:16:37.069848] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:54.454 03:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:54.454 03:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:54.454 03:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KoRS3lc0K0 00:13:55.020 [2024-10-09 03:16:38.058656] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.KoRS3lc0K0': 0100666 00:13:55.020 [2024-10-09 03:16:38.059079] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:55.020 request: 00:13:55.020 { 00:13:55.020 "name": "key0", 00:13:55.020 "path": "/tmp/tmp.KoRS3lc0K0", 00:13:55.020 "method": "keyring_file_add_key", 00:13:55.020 "req_id": 1 00:13:55.020 } 00:13:55.020 Got JSON-RPC error response 00:13:55.020 response: 00:13:55.020 { 00:13:55.020 "code": -1, 00:13:55.020 "message": "Operation not permitted" 00:13:55.020 } 00:13:55.020 03:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:55.279 [2024-10-09 03:16:38.366885] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:55.279 [2024-10-09 03:16:38.367363] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:13:55.279 request: 00:13:55.279 { 00:13:55.279 "name": "TLSTEST", 00:13:55.279 "trtype": "tcp", 00:13:55.279 "traddr": "10.0.0.3", 00:13:55.279 "adrfam": "ipv4", 00:13:55.279 "trsvcid": "4420", 00:13:55.279 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:55.279 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:55.279 "prchk_reftag": false, 00:13:55.279 "prchk_guard": false, 00:13:55.279 "hdgst": false, 00:13:55.279 "ddgst": false, 00:13:55.279 "psk": "key0", 00:13:55.279 "allow_unrecognized_csi": false, 00:13:55.279 "method": "bdev_nvme_attach_controller", 00:13:55.279 "req_id": 1 00:13:55.279 } 00:13:55.279 Got JSON-RPC error response 00:13:55.279 response: 00:13:55.279 { 00:13:55.279 "code": -126, 00:13:55.279 "message": "Required key not available" 00:13:55.279 } 00:13:55.279 03:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72091 00:13:55.279 03:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72091 ']' 00:13:55.279 03:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72091 00:13:55.279 03:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:55.279 03:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:55.279 03:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72091 00:13:55.279 killing process with pid 72091 00:13:55.279 Received shutdown signal, test time was about 10.000000 seconds 00:13:55.279 00:13:55.279 Latency(us) 00:13:55.279 [2024-10-09T03:16:38.582Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:55.279 [2024-10-09T03:16:38.582Z] =================================================================================================================== 00:13:55.279 [2024-10-09T03:16:38.582Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:55.279 03:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:55.279 03:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:55.279 03:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72091' 00:13:55.279 03:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72091 00:13:55.279 03:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72091 00:13:55.538 03:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:55.538 03:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:55.538 03:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:55.538 03:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:55.538 03:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:55.538 03:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 71894 00:13:55.538 03:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71894 ']' 00:13:55.538 03:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71894 00:13:55.538 03:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:55.538 03:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:55.538 03:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71894 00:13:55.538 killing process with pid 71894 00:13:55.538 03:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:55.538 03:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:55.538 03:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71894' 00:13:55.538 03:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71894 00:13:55.538 03:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71894 00:13:55.797 03:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:13:55.797 03:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:55.797 03:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:55.797 03:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:13:55.797 03:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72130 00:13:55.797 03:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:55.797 03:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72130 00:13:55.797 03:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72130 ']' 00:13:55.797 03:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.797 03:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:55.797 03:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.797 03:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:55.797 03:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:56.057 [2024-10-09 03:16:39.114135] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:13:56.057 [2024-10-09 03:16:39.114243] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.057 [2024-10-09 03:16:39.254970] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.315 [2024-10-09 03:16:39.380070] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.315 [2024-10-09 03:16:39.380158] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.315 [2024-10-09 03:16:39.380182] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:56.315 [2024-10-09 03:16:39.380190] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:56.315 [2024-10-09 03:16:39.380198] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
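nvmfappstart launches the target inside the nvmf_tgt_ns_spdk network namespace and then sits in waitforlisten until the RPC socket answers, which is why the "Waiting for process to start up..." line is printed before any RPC goes out. A condensed sketch of that launch; the polling loop shown is illustrative rather than the exact autotest_common.sh implementation:

    SPDK=/home/vagrant/spdk_repo/spdk

    # Start nvmf_tgt on core mask 0x2 inside the test's network namespace.
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!

    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" || exit 1       # give up if the target died during startup
        sleep 0.5
    done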
00:13:56.315 [2024-10-09 03:16:39.380635] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.315 [2024-10-09 03:16:39.442006] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:56.884 03:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:56.884 03:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:56.884 03:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:56.885 03:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:56.885 03:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:56.885 03:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.885 03:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.KoRS3lc0K0 00:13:56.885 03:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:56.885 03:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.KoRS3lc0K0 00:13:56.885 03:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:13:56.885 03:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:56.885 03:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:13:56.885 03:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:56.885 03:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.KoRS3lc0K0 00:13:56.885 03:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.KoRS3lc0K0 00:13:56.885 03:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:57.144 [2024-10-09 03:16:40.394305] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:57.144 03:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:57.712 03:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:57.712 [2024-10-09 03:16:40.966471] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:57.712 [2024-10-09 03:16:40.967053] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:57.712 03:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:57.971 malloc0 00:13:57.971 03:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:58.574 03:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.KoRS3lc0K0 00:13:58.574 
[2024-10-09 03:16:41.812505] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.KoRS3lc0K0': 0100666 00:13:58.574 [2024-10-09 03:16:41.812589] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:58.574 request: 00:13:58.574 { 00:13:58.574 "name": "key0", 00:13:58.574 "path": "/tmp/tmp.KoRS3lc0K0", 00:13:58.574 "method": "keyring_file_add_key", 00:13:58.574 "req_id": 1 00:13:58.574 } 00:13:58.574 Got JSON-RPC error response 00:13:58.574 response: 00:13:58.574 { 00:13:58.574 "code": -1, 00:13:58.574 "message": "Operation not permitted" 00:13:58.574 } 00:13:58.574 03:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:58.841 [2024-10-09 03:16:42.072857] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:13:58.841 [2024-10-09 03:16:42.072946] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:13:58.841 request: 00:13:58.841 { 00:13:58.841 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:58.841 "host": "nqn.2016-06.io.spdk:host1", 00:13:58.841 "psk": "key0", 00:13:58.841 "method": "nvmf_subsystem_add_host", 00:13:58.841 "req_id": 1 00:13:58.841 } 00:13:58.841 Got JSON-RPC error response 00:13:58.841 response: 00:13:58.841 { 00:13:58.841 "code": -32603, 00:13:58.841 "message": "Internal error" 00:13:58.841 } 00:13:58.841 03:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:58.841 03:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:58.841 03:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:58.841 03:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:58.841 03:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 72130 00:13:58.841 03:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72130 ']' 00:13:58.841 03:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72130 00:13:58.841 03:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:58.841 03:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:58.841 03:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72130 00:13:58.841 killing process with pid 72130 00:13:58.841 03:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:58.841 03:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:58.841 03:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72130' 00:13:58.841 03:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72130 00:13:58.841 03:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72130 00:13:59.100 03:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.KoRS3lc0K0 00:13:59.100 03:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:13:59.100 03:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:59.100 03:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:59.100 03:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:59.360 03:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:59.360 03:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72199 00:13:59.360 03:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72199 00:13:59.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.360 03:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72199 ']' 00:13:59.360 03:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.360 03:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:59.360 03:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.360 03:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:59.360 03:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:59.360 [2024-10-09 03:16:42.470789] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:13:59.360 [2024-10-09 03:16:42.471275] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:59.360 [2024-10-09 03:16:42.606975] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.619 [2024-10-09 03:16:42.732779] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:59.619 [2024-10-09 03:16:42.732832] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:59.619 [2024-10-09 03:16:42.732860] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:59.619 [2024-10-09 03:16:42.732868] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:59.619 [2024-10-09 03:16:42.732875] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:59.619 [2024-10-09 03:16:42.733331] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.619 [2024-10-09 03:16:42.795227] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:00.556 03:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:00.556 03:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:00.556 03:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:00.556 03:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:00.556 03:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:00.556 03:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:00.556 03:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.KoRS3lc0K0 00:14:00.556 03:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.KoRS3lc0K0 00:14:00.556 03:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:00.815 [2024-10-09 03:16:43.885227] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:00.815 03:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:01.074 03:16:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:01.334 [2024-10-09 03:16:44.525448] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:01.334 [2024-10-09 03:16:44.525847] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:01.334 03:16:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:01.593 malloc0 00:14:01.593 03:16:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:01.852 03:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.KoRS3lc0K0 00:14:02.112 03:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:02.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
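In the run that follows, bdevperf is started idle again, key0 and the TLSTEST controller are set up over /var/tmp/bdevperf.sock exactly as before, and the queued verify workload is then kicked off through bdevperf's RPC helper; that is what produced the TLSTESTn1 throughput lines and results object captured earlier in this log. The trigger step, using the same command as that earlier run:

    SPDK=/home/vagrant/spdk_repo/spdk

    # With key0 registered and the TLSTEST controller attached, run the queued
    # 10-second verify workload and allow up to 20s for it to complete.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -t 20 -s /var/tmp/bdevperf.sock perform_tests

    # The results object it reports (see the earlier TLSTESTn1 run) carries "iops" and
    # "mibps" for steady-state throughput, plus "io_failed", which should stay 0 when
    # the TLS connection is healthy.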
00:14:02.371 03:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=72260 00:14:02.371 03:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:02.371 03:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:02.371 03:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 72260 /var/tmp/bdevperf.sock 00:14:02.371 03:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72260 ']' 00:14:02.371 03:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:02.371 03:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:02.371 03:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:02.371 03:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:02.371 03:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:02.630 [2024-10-09 03:16:45.690906] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:14:02.630 [2024-10-09 03:16:45.691298] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72260 ] 00:14:02.630 [2024-10-09 03:16:45.823993] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.889 [2024-10-09 03:16:45.953651] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.889 [2024-10-09 03:16:46.011657] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:03.826 03:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:03.826 03:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:03.827 03:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KoRS3lc0K0 00:14:03.827 03:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:04.086 [2024-10-09 03:16:47.350268] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:04.345 TLSTESTn1 00:14:04.345 03:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:04.604 03:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:14:04.604 "subsystems": [ 00:14:04.604 { 00:14:04.604 "subsystem": "keyring", 00:14:04.604 "config": [ 00:14:04.604 { 00:14:04.604 "method": "keyring_file_add_key", 00:14:04.604 "params": { 00:14:04.604 "name": "key0", 00:14:04.604 "path": "/tmp/tmp.KoRS3lc0K0" 00:14:04.605 } 00:14:04.605 } 00:14:04.605 ] 00:14:04.605 }, 
00:14:04.605 { 00:14:04.605 "subsystem": "iobuf", 00:14:04.605 "config": [ 00:14:04.605 { 00:14:04.605 "method": "iobuf_set_options", 00:14:04.605 "params": { 00:14:04.605 "small_pool_count": 8192, 00:14:04.605 "large_pool_count": 1024, 00:14:04.605 "small_bufsize": 8192, 00:14:04.605 "large_bufsize": 135168 00:14:04.605 } 00:14:04.605 } 00:14:04.605 ] 00:14:04.605 }, 00:14:04.605 { 00:14:04.605 "subsystem": "sock", 00:14:04.605 "config": [ 00:14:04.605 { 00:14:04.605 "method": "sock_set_default_impl", 00:14:04.605 "params": { 00:14:04.605 "impl_name": "uring" 00:14:04.605 } 00:14:04.605 }, 00:14:04.605 { 00:14:04.605 "method": "sock_impl_set_options", 00:14:04.605 "params": { 00:14:04.605 "impl_name": "ssl", 00:14:04.605 "recv_buf_size": 4096, 00:14:04.605 "send_buf_size": 4096, 00:14:04.605 "enable_recv_pipe": true, 00:14:04.605 "enable_quickack": false, 00:14:04.605 "enable_placement_id": 0, 00:14:04.605 "enable_zerocopy_send_server": true, 00:14:04.605 "enable_zerocopy_send_client": false, 00:14:04.605 "zerocopy_threshold": 0, 00:14:04.605 "tls_version": 0, 00:14:04.605 "enable_ktls": false 00:14:04.605 } 00:14:04.605 }, 00:14:04.605 { 00:14:04.605 "method": "sock_impl_set_options", 00:14:04.605 "params": { 00:14:04.605 "impl_name": "posix", 00:14:04.605 "recv_buf_size": 2097152, 00:14:04.605 "send_buf_size": 2097152, 00:14:04.605 "enable_recv_pipe": true, 00:14:04.605 "enable_quickack": false, 00:14:04.605 "enable_placement_id": 0, 00:14:04.605 "enable_zerocopy_send_server": true, 00:14:04.605 "enable_zerocopy_send_client": false, 00:14:04.605 "zerocopy_threshold": 0, 00:14:04.605 "tls_version": 0, 00:14:04.605 "enable_ktls": false 00:14:04.605 } 00:14:04.605 }, 00:14:04.605 { 00:14:04.605 "method": "sock_impl_set_options", 00:14:04.605 "params": { 00:14:04.605 "impl_name": "uring", 00:14:04.605 "recv_buf_size": 2097152, 00:14:04.605 "send_buf_size": 2097152, 00:14:04.605 "enable_recv_pipe": true, 00:14:04.605 "enable_quickack": false, 00:14:04.605 "enable_placement_id": 0, 00:14:04.605 "enable_zerocopy_send_server": false, 00:14:04.605 "enable_zerocopy_send_client": false, 00:14:04.605 "zerocopy_threshold": 0, 00:14:04.605 "tls_version": 0, 00:14:04.605 "enable_ktls": false 00:14:04.605 } 00:14:04.605 } 00:14:04.605 ] 00:14:04.605 }, 00:14:04.605 { 00:14:04.605 "subsystem": "vmd", 00:14:04.605 "config": [] 00:14:04.605 }, 00:14:04.605 { 00:14:04.605 "subsystem": "accel", 00:14:04.605 "config": [ 00:14:04.605 { 00:14:04.605 "method": "accel_set_options", 00:14:04.605 "params": { 00:14:04.605 "small_cache_size": 128, 00:14:04.605 "large_cache_size": 16, 00:14:04.605 "task_count": 2048, 00:14:04.605 "sequence_count": 2048, 00:14:04.605 "buf_count": 2048 00:14:04.605 } 00:14:04.605 } 00:14:04.605 ] 00:14:04.605 }, 00:14:04.605 { 00:14:04.605 "subsystem": "bdev", 00:14:04.605 "config": [ 00:14:04.605 { 00:14:04.605 "method": "bdev_set_options", 00:14:04.605 "params": { 00:14:04.605 "bdev_io_pool_size": 65535, 00:14:04.605 "bdev_io_cache_size": 256, 00:14:04.605 "bdev_auto_examine": true, 00:14:04.605 "iobuf_small_cache_size": 128, 00:14:04.605 "iobuf_large_cache_size": 16 00:14:04.605 } 00:14:04.605 }, 00:14:04.605 { 00:14:04.605 "method": "bdev_raid_set_options", 00:14:04.605 "params": { 00:14:04.605 "process_window_size_kb": 1024, 00:14:04.605 "process_max_bandwidth_mb_sec": 0 00:14:04.605 } 00:14:04.605 }, 00:14:04.605 { 00:14:04.605 "method": "bdev_iscsi_set_options", 00:14:04.605 "params": { 00:14:04.605 "timeout_sec": 30 00:14:04.605 } 00:14:04.605 }, 00:14:04.605 { 00:14:04.605 
"method": "bdev_nvme_set_options", 00:14:04.605 "params": { 00:14:04.605 "action_on_timeout": "none", 00:14:04.605 "timeout_us": 0, 00:14:04.605 "timeout_admin_us": 0, 00:14:04.605 "keep_alive_timeout_ms": 10000, 00:14:04.605 "arbitration_burst": 0, 00:14:04.605 "low_priority_weight": 0, 00:14:04.605 "medium_priority_weight": 0, 00:14:04.605 "high_priority_weight": 0, 00:14:04.605 "nvme_adminq_poll_period_us": 10000, 00:14:04.605 "nvme_ioq_poll_period_us": 0, 00:14:04.605 "io_queue_requests": 0, 00:14:04.605 "delay_cmd_submit": true, 00:14:04.605 "transport_retry_count": 4, 00:14:04.605 "bdev_retry_count": 3, 00:14:04.605 "transport_ack_timeout": 0, 00:14:04.605 "ctrlr_loss_timeout_sec": 0, 00:14:04.605 "reconnect_delay_sec": 0, 00:14:04.605 "fast_io_fail_timeout_sec": 0, 00:14:04.605 "disable_auto_failback": false, 00:14:04.605 "generate_uuids": false, 00:14:04.605 "transport_tos": 0, 00:14:04.605 "nvme_error_stat": false, 00:14:04.605 "rdma_srq_size": 0, 00:14:04.605 "io_path_stat": false, 00:14:04.605 "allow_accel_sequence": false, 00:14:04.605 "rdma_max_cq_size": 0, 00:14:04.605 "rdma_cm_event_timeout_ms": 0, 00:14:04.605 "dhchap_digests": [ 00:14:04.605 "sha256", 00:14:04.605 "sha384", 00:14:04.605 "sha512" 00:14:04.605 ], 00:14:04.605 "dhchap_dhgroups": [ 00:14:04.605 "null", 00:14:04.605 "ffdhe2048", 00:14:04.605 "ffdhe3072", 00:14:04.605 "ffdhe4096", 00:14:04.605 "ffdhe6144", 00:14:04.605 "ffdhe8192" 00:14:04.605 ] 00:14:04.605 } 00:14:04.605 }, 00:14:04.605 { 00:14:04.605 "method": "bdev_nvme_set_hotplug", 00:14:04.605 "params": { 00:14:04.605 "period_us": 100000, 00:14:04.605 "enable": false 00:14:04.605 } 00:14:04.605 }, 00:14:04.605 { 00:14:04.605 "method": "bdev_malloc_create", 00:14:04.605 "params": { 00:14:04.605 "name": "malloc0", 00:14:04.605 "num_blocks": 8192, 00:14:04.605 "block_size": 4096, 00:14:04.605 "physical_block_size": 4096, 00:14:04.605 "uuid": "af522689-502f-4add-a28c-96b65eafe651", 00:14:04.605 "optimal_io_boundary": 0, 00:14:04.605 "md_size": 0, 00:14:04.605 "dif_type": 0, 00:14:04.605 "dif_is_head_of_md": false, 00:14:04.605 "dif_pi_format": 0 00:14:04.605 } 00:14:04.605 }, 00:14:04.605 { 00:14:04.605 "method": "bdev_wait_for_examine" 00:14:04.605 } 00:14:04.605 ] 00:14:04.605 }, 00:14:04.605 { 00:14:04.605 "subsystem": "nbd", 00:14:04.605 "config": [] 00:14:04.605 }, 00:14:04.605 { 00:14:04.605 "subsystem": "scheduler", 00:14:04.605 "config": [ 00:14:04.605 { 00:14:04.605 "method": "framework_set_scheduler", 00:14:04.605 "params": { 00:14:04.605 "name": "static" 00:14:04.605 } 00:14:04.605 } 00:14:04.605 ] 00:14:04.605 }, 00:14:04.605 { 00:14:04.605 "subsystem": "nvmf", 00:14:04.605 "config": [ 00:14:04.605 { 00:14:04.605 "method": "nvmf_set_config", 00:14:04.605 "params": { 00:14:04.605 "discovery_filter": "match_any", 00:14:04.605 "admin_cmd_passthru": { 00:14:04.605 "identify_ctrlr": false 00:14:04.605 }, 00:14:04.605 "dhchap_digests": [ 00:14:04.605 "sha256", 00:14:04.605 "sha384", 00:14:04.605 "sha512" 00:14:04.605 ], 00:14:04.605 "dhchap_dhgroups": [ 00:14:04.605 "null", 00:14:04.605 "ffdhe2048", 00:14:04.605 "ffdhe3072", 00:14:04.605 "ffdhe4096", 00:14:04.605 "ffdhe6144", 00:14:04.605 "ffdhe8192" 00:14:04.605 ] 00:14:04.605 } 00:14:04.605 }, 00:14:04.605 { 00:14:04.605 "method": "nvmf_set_max_subsystems", 00:14:04.605 "params": { 00:14:04.605 "max_subsystems": 1024 00:14:04.605 } 00:14:04.605 }, 00:14:04.605 { 00:14:04.605 "method": "nvmf_set_crdt", 00:14:04.605 "params": { 00:14:04.605 "crdt1": 0, 00:14:04.605 "crdt2": 0, 00:14:04.605 "crdt3": 0 
00:14:04.605 } 00:14:04.605 }, 00:14:04.605 { 00:14:04.605 "method": "nvmf_create_transport", 00:14:04.605 "params": { 00:14:04.605 "trtype": "TCP", 00:14:04.605 "max_queue_depth": 128, 00:14:04.605 "max_io_qpairs_per_ctrlr": 127, 00:14:04.605 "in_capsule_data_size": 4096, 00:14:04.605 "max_io_size": 131072, 00:14:04.605 "io_unit_size": 131072, 00:14:04.606 "max_aq_depth": 128, 00:14:04.606 "num_shared_buffers": 511, 00:14:04.606 "buf_cache_size": 4294967295, 00:14:04.606 "dif_insert_or_strip": false, 00:14:04.606 "zcopy": false, 00:14:04.606 "c2h_success": false, 00:14:04.606 "sock_priority": 0, 00:14:04.606 "abort_timeout_sec": 1, 00:14:04.606 "ack_timeout": 0, 00:14:04.606 "data_wr_pool_size": 0 00:14:04.606 } 00:14:04.606 }, 00:14:04.606 { 00:14:04.606 "method": "nvmf_create_subsystem", 00:14:04.606 "params": { 00:14:04.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:04.606 "allow_any_host": false, 00:14:04.606 "serial_number": "SPDK00000000000001", 00:14:04.606 "model_number": "SPDK bdev Controller", 00:14:04.606 "max_namespaces": 10, 00:14:04.606 "min_cntlid": 1, 00:14:04.606 "max_cntlid": 65519, 00:14:04.606 "ana_reporting": false 00:14:04.606 } 00:14:04.606 }, 00:14:04.606 { 00:14:04.606 "method": "nvmf_subsystem_add_host", 00:14:04.606 "params": { 00:14:04.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:04.606 "host": "nqn.2016-06.io.spdk:host1", 00:14:04.606 "psk": "key0" 00:14:04.606 } 00:14:04.606 }, 00:14:04.606 { 00:14:04.606 "method": "nvmf_subsystem_add_ns", 00:14:04.606 "params": { 00:14:04.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:04.606 "namespace": { 00:14:04.606 "nsid": 1, 00:14:04.606 "bdev_name": "malloc0", 00:14:04.606 "nguid": "AF522689502F4ADDA28C96B65EAFE651", 00:14:04.606 "uuid": "af522689-502f-4add-a28c-96b65eafe651", 00:14:04.606 "no_auto_visible": false 00:14:04.606 } 00:14:04.606 } 00:14:04.606 }, 00:14:04.606 { 00:14:04.606 "method": "nvmf_subsystem_add_listener", 00:14:04.606 "params": { 00:14:04.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:04.606 "listen_address": { 00:14:04.606 "trtype": "TCP", 00:14:04.606 "adrfam": "IPv4", 00:14:04.606 "traddr": "10.0.0.3", 00:14:04.606 "trsvcid": "4420" 00:14:04.606 }, 00:14:04.606 "secure_channel": true 00:14:04.606 } 00:14:04.606 } 00:14:04.606 ] 00:14:04.606 } 00:14:04.606 ] 00:14:04.606 }' 00:14:04.606 03:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:04.865 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:14:04.865 "subsystems": [ 00:14:04.865 { 00:14:04.865 "subsystem": "keyring", 00:14:04.865 "config": [ 00:14:04.865 { 00:14:04.865 "method": "keyring_file_add_key", 00:14:04.865 "params": { 00:14:04.865 "name": "key0", 00:14:04.865 "path": "/tmp/tmp.KoRS3lc0K0" 00:14:04.865 } 00:14:04.865 } 00:14:04.865 ] 00:14:04.865 }, 00:14:04.865 { 00:14:04.865 "subsystem": "iobuf", 00:14:04.865 "config": [ 00:14:04.865 { 00:14:04.866 "method": "iobuf_set_options", 00:14:04.866 "params": { 00:14:04.866 "small_pool_count": 8192, 00:14:04.866 "large_pool_count": 1024, 00:14:04.866 "small_bufsize": 8192, 00:14:04.866 "large_bufsize": 135168 00:14:04.866 } 00:14:04.866 } 00:14:04.866 ] 00:14:04.866 }, 00:14:04.866 { 00:14:04.866 "subsystem": "sock", 00:14:04.866 "config": [ 00:14:04.866 { 00:14:04.866 "method": "sock_set_default_impl", 00:14:04.866 "params": { 00:14:04.866 "impl_name": "uring" 00:14:04.866 } 00:14:04.866 }, 00:14:04.866 { 00:14:04.866 "method": 
"sock_impl_set_options", 00:14:04.866 "params": { 00:14:04.866 "impl_name": "ssl", 00:14:04.866 "recv_buf_size": 4096, 00:14:04.866 "send_buf_size": 4096, 00:14:04.866 "enable_recv_pipe": true, 00:14:04.866 "enable_quickack": false, 00:14:04.866 "enable_placement_id": 0, 00:14:04.866 "enable_zerocopy_send_server": true, 00:14:04.866 "enable_zerocopy_send_client": false, 00:14:04.866 "zerocopy_threshold": 0, 00:14:04.866 "tls_version": 0, 00:14:04.866 "enable_ktls": false 00:14:04.866 } 00:14:04.866 }, 00:14:04.866 { 00:14:04.866 "method": "sock_impl_set_options", 00:14:04.866 "params": { 00:14:04.866 "impl_name": "posix", 00:14:04.866 "recv_buf_size": 2097152, 00:14:04.866 "send_buf_size": 2097152, 00:14:04.866 "enable_recv_pipe": true, 00:14:04.866 "enable_quickack": false, 00:14:04.866 "enable_placement_id": 0, 00:14:04.866 "enable_zerocopy_send_server": true, 00:14:04.866 "enable_zerocopy_send_client": false, 00:14:04.866 "zerocopy_threshold": 0, 00:14:04.866 "tls_version": 0, 00:14:04.866 "enable_ktls": false 00:14:04.866 } 00:14:04.866 }, 00:14:04.866 { 00:14:04.866 "method": "sock_impl_set_options", 00:14:04.866 "params": { 00:14:04.866 "impl_name": "uring", 00:14:04.866 "recv_buf_size": 2097152, 00:14:04.866 "send_buf_size": 2097152, 00:14:04.866 "enable_recv_pipe": true, 00:14:04.866 "enable_quickack": false, 00:14:04.866 "enable_placement_id": 0, 00:14:04.866 "enable_zerocopy_send_server": false, 00:14:04.866 "enable_zerocopy_send_client": false, 00:14:04.866 "zerocopy_threshold": 0, 00:14:04.866 "tls_version": 0, 00:14:04.866 "enable_ktls": false 00:14:04.866 } 00:14:04.866 } 00:14:04.866 ] 00:14:04.866 }, 00:14:04.866 { 00:14:04.866 "subsystem": "vmd", 00:14:04.866 "config": [] 00:14:04.866 }, 00:14:04.866 { 00:14:04.866 "subsystem": "accel", 00:14:04.866 "config": [ 00:14:04.866 { 00:14:04.866 "method": "accel_set_options", 00:14:04.866 "params": { 00:14:04.866 "small_cache_size": 128, 00:14:04.866 "large_cache_size": 16, 00:14:04.866 "task_count": 2048, 00:14:04.866 "sequence_count": 2048, 00:14:04.866 "buf_count": 2048 00:14:04.866 } 00:14:04.866 } 00:14:04.866 ] 00:14:04.866 }, 00:14:04.866 { 00:14:04.866 "subsystem": "bdev", 00:14:04.866 "config": [ 00:14:04.866 { 00:14:04.866 "method": "bdev_set_options", 00:14:04.866 "params": { 00:14:04.866 "bdev_io_pool_size": 65535, 00:14:04.866 "bdev_io_cache_size": 256, 00:14:04.866 "bdev_auto_examine": true, 00:14:04.866 "iobuf_small_cache_size": 128, 00:14:04.866 "iobuf_large_cache_size": 16 00:14:04.866 } 00:14:04.866 }, 00:14:04.866 { 00:14:04.866 "method": "bdev_raid_set_options", 00:14:04.866 "params": { 00:14:04.866 "process_window_size_kb": 1024, 00:14:04.866 "process_max_bandwidth_mb_sec": 0 00:14:04.866 } 00:14:04.866 }, 00:14:04.866 { 00:14:04.866 "method": "bdev_iscsi_set_options", 00:14:04.866 "params": { 00:14:04.866 "timeout_sec": 30 00:14:04.866 } 00:14:04.866 }, 00:14:04.866 { 00:14:04.866 "method": "bdev_nvme_set_options", 00:14:04.866 "params": { 00:14:04.866 "action_on_timeout": "none", 00:14:04.866 "timeout_us": 0, 00:14:04.866 "timeout_admin_us": 0, 00:14:04.866 "keep_alive_timeout_ms": 10000, 00:14:04.866 "arbitration_burst": 0, 00:14:04.866 "low_priority_weight": 0, 00:14:04.866 "medium_priority_weight": 0, 00:14:04.866 "high_priority_weight": 0, 00:14:04.866 "nvme_adminq_poll_period_us": 10000, 00:14:04.866 "nvme_ioq_poll_period_us": 0, 00:14:04.866 "io_queue_requests": 512, 00:14:04.866 "delay_cmd_submit": true, 00:14:04.866 "transport_retry_count": 4, 00:14:04.866 "bdev_retry_count": 3, 00:14:04.866 
"transport_ack_timeout": 0, 00:14:04.866 "ctrlr_loss_timeout_sec": 0, 00:14:04.866 "reconnect_delay_sec": 0, 00:14:04.866 "fast_io_fail_timeout_sec": 0, 00:14:04.866 "disable_auto_failback": false, 00:14:04.866 "generate_uuids": false, 00:14:04.866 "transport_tos": 0, 00:14:04.866 "nvme_error_stat": false, 00:14:04.866 "rdma_srq_size": 0, 00:14:04.866 "io_path_stat": false, 00:14:04.866 "allow_accel_sequence": false, 00:14:04.866 "rdma_max_cq_size": 0, 00:14:04.866 "rdma_cm_event_timeout_ms": 0, 00:14:04.866 "dhchap_digests": [ 00:14:04.866 "sha256", 00:14:04.866 "sha384", 00:14:04.866 "sha512" 00:14:04.866 ], 00:14:04.866 "dhchap_dhgroups": [ 00:14:04.866 "null", 00:14:04.866 "ffdhe2048", 00:14:04.866 "ffdhe3072", 00:14:04.866 "ffdhe4096", 00:14:04.866 "ffdhe6144", 00:14:04.866 "ffdhe8192" 00:14:04.866 ] 00:14:04.866 } 00:14:04.866 }, 00:14:04.866 { 00:14:04.866 "method": "bdev_nvme_attach_controller", 00:14:04.866 "params": { 00:14:04.866 "name": "TLSTEST", 00:14:04.866 "trtype": "TCP", 00:14:04.866 "adrfam": "IPv4", 00:14:04.866 "traddr": "10.0.0.3", 00:14:04.866 "trsvcid": "4420", 00:14:04.866 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:04.866 "prchk_reftag": false, 00:14:04.866 "prchk_guard": false, 00:14:04.866 "ctrlr_loss_timeout_sec": 0, 00:14:04.866 "reconnect_delay_sec": 0, 00:14:04.866 "fast_io_fail_timeout_sec": 0, 00:14:04.866 "psk": "key0", 00:14:04.866 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:04.866 "hdgst": false, 00:14:04.866 "ddgst": false, 00:14:04.866 "multipath": "multipath" 00:14:04.866 } 00:14:04.866 }, 00:14:04.866 { 00:14:04.866 "method": "bdev_nvme_set_hotplug", 00:14:04.866 "params": { 00:14:04.866 "period_us": 100000, 00:14:04.866 "enable": false 00:14:04.866 } 00:14:04.866 }, 00:14:04.866 { 00:14:04.866 "method": "bdev_wait_for_examine" 00:14:04.866 } 00:14:04.866 ] 00:14:04.866 }, 00:14:04.866 { 00:14:04.866 "subsystem": "nbd", 00:14:04.866 "config": [] 00:14:04.866 } 00:14:04.866 ] 00:14:04.866 }' 00:14:04.866 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 72260 00:14:04.866 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72260 ']' 00:14:04.866 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72260 00:14:04.866 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:04.866 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:04.866 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72260 00:14:05.125 killing process with pid 72260 00:14:05.125 Received shutdown signal, test time was about 10.000000 seconds 00:14:05.125 00:14:05.125 Latency(us) 00:14:05.125 [2024-10-09T03:16:48.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:05.125 [2024-10-09T03:16:48.428Z] =================================================================================================================== 00:14:05.125 [2024-10-09T03:16:48.428Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:05.125 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:05.125 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:05.125 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72260' 
00:14:05.125 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72260 00:14:05.125 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72260 00:14:05.125 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 72199 00:14:05.125 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72199 ']' 00:14:05.125 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72199 00:14:05.125 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:05.126 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:05.126 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72199 00:14:05.385 killing process with pid 72199 00:14:05.385 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:05.385 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:05.385 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72199' 00:14:05.385 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72199 00:14:05.385 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72199 00:14:05.644 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:05.644 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:14:05.644 "subsystems": [ 00:14:05.644 { 00:14:05.644 "subsystem": "keyring", 00:14:05.644 "config": [ 00:14:05.644 { 00:14:05.644 "method": "keyring_file_add_key", 00:14:05.644 "params": { 00:14:05.644 "name": "key0", 00:14:05.644 "path": "/tmp/tmp.KoRS3lc0K0" 00:14:05.644 } 00:14:05.644 } 00:14:05.644 ] 00:14:05.644 }, 00:14:05.644 { 00:14:05.644 "subsystem": "iobuf", 00:14:05.644 "config": [ 00:14:05.644 { 00:14:05.644 "method": "iobuf_set_options", 00:14:05.644 "params": { 00:14:05.644 "small_pool_count": 8192, 00:14:05.644 "large_pool_count": 1024, 00:14:05.644 "small_bufsize": 8192, 00:14:05.644 "large_bufsize": 135168 00:14:05.644 } 00:14:05.644 } 00:14:05.644 ] 00:14:05.644 }, 00:14:05.644 { 00:14:05.644 "subsystem": "sock", 00:14:05.644 "config": [ 00:14:05.644 { 00:14:05.644 "method": "sock_set_default_impl", 00:14:05.644 "params": { 00:14:05.644 "impl_name": "uring" 00:14:05.644 } 00:14:05.644 }, 00:14:05.644 { 00:14:05.644 "method": "sock_impl_set_options", 00:14:05.644 "params": { 00:14:05.644 "impl_name": "ssl", 00:14:05.644 "recv_buf_size": 4096, 00:14:05.644 "send_buf_size": 4096, 00:14:05.644 "enable_recv_pipe": true, 00:14:05.644 "enable_quickack": false, 00:14:05.644 "enable_placement_id": 0, 00:14:05.644 "enable_zerocopy_send_server": true, 00:14:05.644 "enable_zerocopy_send_client": false, 00:14:05.644 "zerocopy_threshold": 0, 00:14:05.644 "tls_version": 0, 00:14:05.644 "enable_ktls": false 00:14:05.644 } 00:14:05.644 }, 00:14:05.644 { 00:14:05.644 "method": "sock_impl_set_options", 00:14:05.644 "params": { 00:14:05.644 "impl_name": "posix", 00:14:05.644 "recv_buf_size": 2097152, 00:14:05.644 "send_buf_size": 2097152, 00:14:05.644 "enable_recv_pipe": true, 00:14:05.644 "enable_quickack": false, 00:14:05.645 "enable_placement_id": 0, 00:14:05.645 
"enable_zerocopy_send_server": true, 00:14:05.645 "enable_zerocopy_send_client": false, 00:14:05.645 "zerocopy_threshold": 0, 00:14:05.645 "tls_version": 0, 00:14:05.645 "enable_ktls": false 00:14:05.645 } 00:14:05.645 }, 00:14:05.645 { 00:14:05.645 "method": "sock_impl_set_options", 00:14:05.645 "params": { 00:14:05.645 "impl_name": "uring", 00:14:05.645 "recv_buf_size": 2097152, 00:14:05.645 "send_buf_size": 2097152, 00:14:05.645 "enable_recv_pipe": true, 00:14:05.645 "enable_quickack": false, 00:14:05.645 "enable_placement_id": 0, 00:14:05.645 "enable_zerocopy_send_server": false, 00:14:05.645 "enable_zerocopy_send_client": false, 00:14:05.645 "zerocopy_threshold": 0, 00:14:05.645 "tls_version": 0, 00:14:05.645 "enable_ktls": false 00:14:05.645 } 00:14:05.645 } 00:14:05.645 ] 00:14:05.645 }, 00:14:05.645 { 00:14:05.645 "subsystem": "vmd", 00:14:05.645 "config": [] 00:14:05.645 }, 00:14:05.645 { 00:14:05.645 "subsystem": "accel", 00:14:05.645 "config": [ 00:14:05.645 { 00:14:05.645 "method": "accel_set_options", 00:14:05.645 "params": { 00:14:05.645 "small_cache_size": 128, 00:14:05.645 "large_cache_size": 16, 00:14:05.645 "task_count": 2048, 00:14:05.645 "sequence_count": 2048, 00:14:05.645 "buf_count": 2048 00:14:05.645 } 00:14:05.645 } 00:14:05.645 ] 00:14:05.645 }, 00:14:05.645 { 00:14:05.645 "subsystem": "bdev", 00:14:05.645 "config": [ 00:14:05.645 { 00:14:05.645 "method": "bdev_set_options", 00:14:05.645 "params": { 00:14:05.645 "bdev_io_pool_size": 65535, 00:14:05.645 "bdev_io_cache_size": 256, 00:14:05.645 "bdev_auto_examine": true, 00:14:05.645 "iobuf_small_cache_size": 128, 00:14:05.645 "iobuf_large_cache_size": 16 00:14:05.645 } 00:14:05.645 }, 00:14:05.645 { 00:14:05.645 "method": "bdev_raid_set_options", 00:14:05.645 "params": { 00:14:05.645 "process_window_size_kb": 1024, 00:14:05.645 "process_max_bandwidth_mb_sec": 0 00:14:05.645 } 00:14:05.645 }, 00:14:05.645 { 00:14:05.645 "method": "bdev_iscsi_set_options", 00:14:05.645 "params": { 00:14:05.645 "timeout_sec": 30 00:14:05.645 } 00:14:05.645 }, 00:14:05.645 { 00:14:05.645 "method": "bdev_nvme_set_options", 00:14:05.645 "params": { 00:14:05.645 "action_on_timeout": "none", 00:14:05.645 "timeout_us": 0, 00:14:05.645 "timeout_admin_us": 0, 00:14:05.645 "keep_alive_timeout_ms": 10000, 00:14:05.645 "arbitration_burst": 0, 00:14:05.645 "low_priority_weight": 0, 00:14:05.645 "medium_priority_weight": 0, 00:14:05.645 "high_priority_weight": 0, 00:14:05.645 "nvme_adminq_poll_period_us": 10000, 00:14:05.645 "nvme_ioq_poll_period_us": 0, 00:14:05.645 "io_queue_requests": 0, 00:14:05.645 "delay_cmd_submit": true, 00:14:05.645 "transport_retry_count": 4, 00:14:05.645 "bdev_retry_count": 3, 00:14:05.645 "transport_ack_timeout": 0, 00:14:05.645 "ctrlr_loss_timeout_sec": 0, 00:14:05.645 "reconnect_delay_sec": 0, 00:14:05.645 "fast_io_fail_timeout_sec": 0, 00:14:05.645 "disable_auto_failback": false, 00:14:05.645 "generate_uuids": false, 00:14:05.645 "transport_tos": 0, 00:14:05.645 "nvme_error_stat": false, 00:14:05.645 "rdma_srq_size": 0, 00:14:05.645 "io_path_stat": false, 00:14:05.645 "allow_accel_sequence": false, 00:14:05.645 "rdma_max_cq_size": 0, 00:14:05.645 "rdma_cm_event_timeout_ms": 0, 00:14:05.645 "dhchap_digests": [ 00:14:05.645 "sha256", 00:14:05.645 "sha384", 00:14:05.645 "sha512" 00:14:05.645 ], 00:14:05.645 "dhchap_dhgroups": [ 00:14:05.645 "null", 00:14:05.645 "ffdhe2048", 00:14:05.645 "ffdhe3072", 00:14:05.645 "ffdhe4096", 00:14:05.645 "ffdhe6144", 00:14:05.645 "ffdhe8192" 00:14:05.645 ] 00:14:05.645 } 00:14:05.645 }, 
00:14:05.645 { 00:14:05.645 "method": "bdev_nvme_set_hotplug", 00:14:05.645 "params": { 00:14:05.645 "period_us": 100000, 00:14:05.645 "enable": false 00:14:05.645 } 00:14:05.645 }, 00:14:05.645 { 00:14:05.645 "method": "bdev_malloc_create", 00:14:05.645 "params": { 00:14:05.645 "name": "malloc0", 00:14:05.645 "num_blocks": 8192, 00:14:05.645 "block_size": 4096, 00:14:05.645 "physical_block_size": 4096, 00:14:05.645 "uuid": "af522689-502f-4add-a28c-96b65eafe651", 00:14:05.645 "optimal_io_boundary": 0, 00:14:05.645 "md_size": 0, 00:14:05.645 "dif_type": 0, 00:14:05.645 "dif_is_head_of_md": false, 00:14:05.645 "dif_pi_format": 0 00:14:05.645 } 00:14:05.645 }, 00:14:05.645 { 00:14:05.645 "method": "bdev_wait_for_examine" 00:14:05.645 } 00:14:05.645 ] 00:14:05.645 }, 00:14:05.645 { 00:14:05.645 "subsystem": "nbd", 00:14:05.645 "config": [] 00:14:05.645 }, 00:14:05.645 { 00:14:05.645 "subsystem": "scheduler", 00:14:05.645 "config": [ 00:14:05.645 { 00:14:05.645 "method": "framework_set_scheduler", 00:14:05.645 "params": { 00:14:05.645 "name": "static" 00:14:05.645 } 00:14:05.645 } 00:14:05.645 ] 00:14:05.645 }, 00:14:05.645 { 00:14:05.645 "subsystem": "nvmf", 00:14:05.645 "config": [ 00:14:05.645 { 00:14:05.645 "method": "nvmf_set_config", 00:14:05.645 "params": { 00:14:05.645 "discovery_filter": "match_any", 00:14:05.645 "admin_cmd_passthru": { 00:14:05.645 "identify_ctrlr": false 00:14:05.645 }, 00:14:05.645 "dhchap_digests": [ 00:14:05.645 "sha256", 00:14:05.645 "sha384", 00:14:05.645 "sha512" 00:14:05.645 ], 00:14:05.645 "dhchap_dhgroups": [ 00:14:05.645 "null", 00:14:05.645 "ffdhe2048", 00:14:05.645 "ffdhe3072", 00:14:05.645 "ffdhe4096", 00:14:05.645 "ffdhe6144", 00:14:05.645 "ffdhe8192" 00:14:05.645 ] 00:14:05.645 } 00:14:05.645 }, 00:14:05.645 { 00:14:05.645 "method": "nvmf_set_max_subsystems", 00:14:05.645 "params": { 00:14:05.645 "max_subsystems": 1024 00:14:05.645 } 00:14:05.645 }, 00:14:05.645 { 00:14:05.645 "method": "nvmf_set_crdt", 00:14:05.645 "params": { 00:14:05.645 "crdt1": 0, 00:14:05.645 "crdt2": 0, 00:14:05.645 "crdt3": 0 00:14:05.645 } 00:14:05.645 }, 00:14:05.645 { 00:14:05.645 "method": "nvmf_create_transport", 00:14:05.645 "params": { 00:14:05.645 "trtype": "TCP", 00:14:05.645 "max_queue_depth": 128, 00:14:05.645 "max_io_qpairs_per_ctrlr": 127, 00:14:05.645 "in_capsule_data_size": 4096, 00:14:05.645 "max_io_size": 131072, 00:14:05.645 "io_unit_size": 131072, 00:14:05.645 "max_aq_depth": 128, 00:14:05.645 "num_shared_buffers": 511, 00:14:05.645 "buf_cache_size": 4294967295, 00:14:05.645 "dif_insert_or_strip": false, 00:14:05.645 "zcopy": false, 00:14:05.645 "c2h_success": false, 00:14:05.645 "sock_priority": 0, 00:14:05.645 "abort_timeout_sec": 1, 00:14:05.645 "ack_timeout": 0, 00:14:05.645 "data_wr_pool_size": 0 00:14:05.645 } 00:14:05.645 }, 00:14:05.645 { 00:14:05.645 "method": "nvmf_create_subsystem", 00:14:05.645 "params": { 00:14:05.645 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:05.645 "allow_any_host": false, 00:14:05.645 "serial_number": "SPDK00000000000001", 00:14:05.645 "model_number": "SPDK bdev Controller", 00:14:05.645 "max_namespaces": 10, 00:14:05.645 "min_cntlid": 1, 00:14:05.645 "max_cntlid": 65519, 00:14:05.645 "ana_reporting": false 00:14:05.645 } 00:14:05.645 }, 00:14:05.645 { 00:14:05.645 "method": "nvmf_subsystem_add_host", 00:14:05.645 "params": { 00:14:05.645 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:05.645 "host": "nqn.2016-06.io.spdk:host1", 00:14:05.645 "psk": "key0" 00:14:05.645 } 00:14:05.645 }, 00:14:05.645 { 00:14:05.645 "method": 
"nvmf_subsystem_add_ns", 00:14:05.645 "params": { 00:14:05.645 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:05.645 "namespace": { 00:14:05.645 "nsid": 1, 00:14:05.645 "bdev_name": "malloc0", 00:14:05.645 "nguid": "AF522689502F4ADDA28C96B65EAFE651", 00:14:05.645 "uuid": "af522689-502f-4add-a28c-96b65eafe651", 00:14:05.645 "no_auto_visible": false 00:14:05.645 } 00:14:05.645 } 00:14:05.645 }, 00:14:05.646 { 00:14:05.646 "method": "nvmf_subsystem_add_listener", 00:14:05.646 "params": { 00:14:05.646 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:05.646 "listen_address": { 00:14:05.646 "trtype": "TCP", 00:14:05.646 "adrfam": "IPv4", 00:14:05.646 "traddr": "10.0.0.3", 00:14:05.646 "trsvcid": "4420" 00:14:05.646 }, 00:14:05.646 "secure_channel": true 00:14:05.646 } 00:14:05.646 } 00:14:05.646 ] 00:14:05.646 } 00:14:05.646 ] 00:14:05.646 }' 00:14:05.646 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:05.646 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:05.646 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:05.646 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72315 00:14:05.646 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:05.646 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72315 00:14:05.646 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72315 ']' 00:14:05.646 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.646 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:05.646 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.646 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:05.646 03:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:05.646 [2024-10-09 03:16:48.846391] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:14:05.646 [2024-10-09 03:16:48.846742] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.905 [2024-10-09 03:16:48.982884] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.905 [2024-10-09 03:16:49.119679] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.905 [2024-10-09 03:16:49.119755] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:05.905 [2024-10-09 03:16:49.119766] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:05.905 [2024-10-09 03:16:49.119774] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:14:05.905 [2024-10-09 03:16:49.119781] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:05.905 [2024-10-09 03:16:49.120328] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.164 [2024-10-09 03:16:49.308258] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:06.164 [2024-10-09 03:16:49.403440] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:06.164 [2024-10-09 03:16:49.441460] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:06.164 [2024-10-09 03:16:49.441728] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:06.733 03:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:06.733 03:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:06.733 03:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:06.733 03:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:06.733 03:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:06.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:06.733 03:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.733 03:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=72353 00:14:06.733 03:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 72353 /var/tmp/bdevperf.sock 00:14:06.733 03:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72353 ']' 00:14:06.733 03:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:06.733 03:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:06.733 03:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
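At target/tls.sh@198 and @199 the live configurations of the target and of bdevperf were captured with save_config into the tgtconf and bdevperfconf shell variables, and at @205/@206 those JSON blobs are echoed back into the freshly started nvmf_tgt (pid 72315) and bdevperf (pid 72353) as application config files. The /dev/fd/62 and /dev/fd/63 paths in the command lines come from handing that JSON to the apps on an inherited file descriptor. A hedged sketch of the capture-and-replay idea, using the binaries from this run but simplified (no network namespace, no extra flags):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # capture the running target's configuration as JSON
  tgtconf=$($RPC save_config)

  # replay it into a new target process; bash process substitution exposes the
  # JSON on a /dev/fd/NN path, which is what '-c /dev/fd/62' above refers to
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf") &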
00:14:06.733 03:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:06.733 03:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:06.733 03:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:06.733 03:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:14:06.733 "subsystems": [ 00:14:06.733 { 00:14:06.733 "subsystem": "keyring", 00:14:06.733 "config": [ 00:14:06.733 { 00:14:06.733 "method": "keyring_file_add_key", 00:14:06.733 "params": { 00:14:06.733 "name": "key0", 00:14:06.733 "path": "/tmp/tmp.KoRS3lc0K0" 00:14:06.733 } 00:14:06.733 } 00:14:06.733 ] 00:14:06.733 }, 00:14:06.733 { 00:14:06.733 "subsystem": "iobuf", 00:14:06.733 "config": [ 00:14:06.733 { 00:14:06.733 "method": "iobuf_set_options", 00:14:06.733 "params": { 00:14:06.733 "small_pool_count": 8192, 00:14:06.733 "large_pool_count": 1024, 00:14:06.733 "small_bufsize": 8192, 00:14:06.733 "large_bufsize": 135168 00:14:06.733 } 00:14:06.733 } 00:14:06.733 ] 00:14:06.733 }, 00:14:06.733 { 00:14:06.733 "subsystem": "sock", 00:14:06.733 "config": [ 00:14:06.733 { 00:14:06.733 "method": "sock_set_default_impl", 00:14:06.733 "params": { 00:14:06.733 "impl_name": "uring" 00:14:06.733 } 00:14:06.733 }, 00:14:06.733 { 00:14:06.733 "method": "sock_impl_set_options", 00:14:06.733 "params": { 00:14:06.733 "impl_name": "ssl", 00:14:06.733 "recv_buf_size": 4096, 00:14:06.733 "send_buf_size": 4096, 00:14:06.733 "enable_recv_pipe": true, 00:14:06.733 "enable_quickack": false, 00:14:06.733 "enable_placement_id": 0, 00:14:06.733 "enable_zerocopy_send_server": true, 00:14:06.733 "enable_zerocopy_send_client": false, 00:14:06.733 "zerocopy_threshold": 0, 00:14:06.734 "tls_version": 0, 00:14:06.734 "enable_ktls": false 00:14:06.734 } 00:14:06.734 }, 00:14:06.734 { 00:14:06.734 "method": "sock_impl_set_options", 00:14:06.734 "params": { 00:14:06.734 "impl_name": "posix", 00:14:06.734 "recv_buf_size": 2097152, 00:14:06.734 "send_buf_size": 2097152, 00:14:06.734 "enable_recv_pipe": true, 00:14:06.734 "enable_quickack": false, 00:14:06.734 "enable_placement_id": 0, 00:14:06.734 "enable_zerocopy_send_server": true, 00:14:06.734 "enable_zerocopy_send_client": false, 00:14:06.734 "zerocopy_threshold": 0, 00:14:06.734 "tls_version": 0, 00:14:06.734 "enable_ktls": false 00:14:06.734 } 00:14:06.734 }, 00:14:06.734 { 00:14:06.734 "method": "sock_impl_set_options", 00:14:06.734 "params": { 00:14:06.734 "impl_name": "uring", 00:14:06.734 "recv_buf_size": 2097152, 00:14:06.734 "send_buf_size": 2097152, 00:14:06.734 "enable_recv_pipe": true, 00:14:06.734 "enable_quickack": false, 00:14:06.734 "enable_placement_id": 0, 00:14:06.734 "enable_zerocopy_send_server": false, 00:14:06.734 "enable_zerocopy_send_client": false, 00:14:06.734 "zerocopy_threshold": 0, 00:14:06.734 "tls_version": 0, 00:14:06.734 "enable_ktls": false 00:14:06.734 } 00:14:06.734 } 00:14:06.734 ] 00:14:06.734 }, 00:14:06.734 { 00:14:06.734 "subsystem": "vmd", 00:14:06.734 "config": [] 00:14:06.734 }, 00:14:06.734 { 00:14:06.734 "subsystem": "accel", 00:14:06.734 "config": [ 00:14:06.734 { 00:14:06.734 "method": "accel_set_options", 00:14:06.734 "params": { 00:14:06.734 "small_cache_size": 128, 00:14:06.734 "large_cache_size": 16, 00:14:06.734 "task_count": 2048, 00:14:06.734 "sequence_count": 2048, 00:14:06.734 "buf_count": 2048 
00:14:06.734 } 00:14:06.734 } 00:14:06.734 ] 00:14:06.734 }, 00:14:06.734 { 00:14:06.734 "subsystem": "bdev", 00:14:06.734 "config": [ 00:14:06.734 { 00:14:06.734 "method": "bdev_set_options", 00:14:06.734 "params": { 00:14:06.734 "bdev_io_pool_size": 65535, 00:14:06.734 "bdev_io_cache_size": 256, 00:14:06.734 "bdev_auto_examine": true, 00:14:06.734 "iobuf_small_cache_size": 128, 00:14:06.734 "iobuf_large_cache_size": 16 00:14:06.734 } 00:14:06.734 }, 00:14:06.734 { 00:14:06.734 "method": "bdev_raid_set_options", 00:14:06.734 "params": { 00:14:06.734 "process_window_size_kb": 1024, 00:14:06.734 "process_max_bandwidth_mb_sec": 0 00:14:06.734 } 00:14:06.734 }, 00:14:06.734 { 00:14:06.734 "method": "bdev_iscsi_set_options", 00:14:06.734 "params": { 00:14:06.734 "timeout_sec": 30 00:14:06.734 } 00:14:06.734 }, 00:14:06.734 { 00:14:06.734 "method": "bdev_nvme_set_options", 00:14:06.734 "params": { 00:14:06.734 "action_on_timeout": "none", 00:14:06.734 "timeout_us": 0, 00:14:06.734 "timeout_admin_us": 0, 00:14:06.734 "keep_alive_timeout_ms": 10000, 00:14:06.734 "arbitration_burst": 0, 00:14:06.734 "low_priority_weight": 0, 00:14:06.734 "medium_priority_weight": 0, 00:14:06.734 "high_priority_weight": 0, 00:14:06.734 "nvme_adminq_poll_period_us": 10000, 00:14:06.734 "nvme_ioq_poll_period_us": 0, 00:14:06.734 "io_queue_requests": 512, 00:14:06.734 "delay_cmd_submit": true, 00:14:06.734 "transport_retry_count": 4, 00:14:06.734 "bdev_retry_count": 3, 00:14:06.734 "transport_ack_timeout": 0, 00:14:06.734 "ctrlr_loss_timeout_sec": 0, 00:14:06.734 "reconnect_delay_sec": 0, 00:14:06.734 "fast_io_fail_timeout_sec": 0, 00:14:06.734 "disable_auto_failback": false, 00:14:06.734 "generate_uuids": false, 00:14:06.734 "transport_tos": 0, 00:14:06.734 "nvme_error_stat": false, 00:14:06.734 "rdma_srq_size": 0, 00:14:06.734 "io_path_stat": false, 00:14:06.734 "allow_accel_sequence": false, 00:14:06.734 "rdma_max_cq_size": 0, 00:14:06.734 "rdma_cm_event_timeout_ms": 0, 00:14:06.734 "dhchap_digests": [ 00:14:06.734 "sha256", 00:14:06.734 "sha384", 00:14:06.734 "sha512" 00:14:06.734 ], 00:14:06.734 "dhchap_dhgroups": [ 00:14:06.734 "null", 00:14:06.734 "ffdhe2048", 00:14:06.734 "ffdhe3072", 00:14:06.734 "ffdhe4096", 00:14:06.734 "ffdhe6144", 00:14:06.734 "ffdhe8192" 00:14:06.734 ] 00:14:06.734 } 00:14:06.734 }, 00:14:06.734 { 00:14:06.734 "method": "bdev_nvme_attach_controller", 00:14:06.734 "params": { 00:14:06.734 "name": "TLSTEST", 00:14:06.734 "trtype": "TCP", 00:14:06.734 "adrfam": "IPv4", 00:14:06.734 "traddr": "10.0.0.3", 00:14:06.734 "trsvcid": "4420", 00:14:06.734 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:06.734 "prchk_reftag": false, 00:14:06.734 "prchk_guard": false, 00:14:06.734 "ctrlr_loss_timeout_sec": 0, 00:14:06.734 "reconnect_delay_sec": 0, 00:14:06.734 "fast_io_fail_timeout_sec": 0, 00:14:06.734 "psk": "key0", 00:14:06.734 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:06.734 "hdgst": false, 00:14:06.734 "ddgst": false, 00:14:06.734 "multipath": "multipath" 00:14:06.734 } 00:14:06.734 }, 00:14:06.734 { 00:14:06.734 "method": "bdev_nvme_set_hotplug", 00:14:06.734 "params": { 00:14:06.734 "period_us": 100000, 00:14:06.734 "enable": false 00:14:06.734 } 00:14:06.734 }, 00:14:06.734 { 00:14:06.734 "method": "bdev_wait_for_examine" 00:14:06.734 } 00:14:06.734 ] 00:14:06.734 }, 00:14:06.734 { 00:14:06.734 "subsystem": "nbd", 00:14:06.734 "config": [] 00:14:06.734 } 00:14:06.734 ] 00:14:06.734 }' 00:14:06.734 [2024-10-09 03:16:49.984201] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 
initialization... 00:14:06.734 [2024-10-09 03:16:49.984304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72353 ] 00:14:06.993 [2024-10-09 03:16:50.122089] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.993 [2024-10-09 03:16:50.240228] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:07.252 [2024-10-09 03:16:50.373291] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:07.252 [2024-10-09 03:16:50.419333] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:07.832 03:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:07.832 03:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:07.832 03:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:07.833 Running I/O for 10 seconds... 00:14:10.164 3978.00 IOPS, 15.54 MiB/s [2024-10-09T03:16:54.404Z] 4067.00 IOPS, 15.89 MiB/s [2024-10-09T03:16:55.341Z] 4088.00 IOPS, 15.97 MiB/s [2024-10-09T03:16:56.277Z] 4091.00 IOPS, 15.98 MiB/s [2024-10-09T03:16:57.214Z] 4101.20 IOPS, 16.02 MiB/s [2024-10-09T03:16:58.150Z] 4100.67 IOPS, 16.02 MiB/s [2024-10-09T03:16:59.527Z] 4105.71 IOPS, 16.04 MiB/s [2024-10-09T03:17:00.463Z] 4110.88 IOPS, 16.06 MiB/s [2024-10-09T03:17:01.400Z] 4117.11 IOPS, 16.08 MiB/s [2024-10-09T03:17:01.400Z] 4109.60 IOPS, 16.05 MiB/s 00:14:18.097 Latency(us) 00:14:18.097 [2024-10-09T03:17:01.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.097 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:18.097 Verification LBA range: start 0x0 length 0x2000 00:14:18.097 TLSTESTn1 : 10.02 4115.94 16.08 0.00 0.00 31046.94 4647.10 23235.49 00:14:18.097 [2024-10-09T03:17:01.400Z] =================================================================================================================== 00:14:18.097 [2024-10-09T03:17:01.400Z] Total : 4115.94 16.08 0.00 0.00 31046.94 4647.10 23235.49 00:14:18.097 { 00:14:18.097 "results": [ 00:14:18.097 { 00:14:18.097 "job": "TLSTESTn1", 00:14:18.097 "core_mask": "0x4", 00:14:18.097 "workload": "verify", 00:14:18.098 "status": "finished", 00:14:18.098 "verify_range": { 00:14:18.098 "start": 0, 00:14:18.098 "length": 8192 00:14:18.098 }, 00:14:18.098 "queue_depth": 128, 00:14:18.098 "io_size": 4096, 00:14:18.098 "runtime": 10.0152, 00:14:18.098 "iops": 4115.943765476476, 00:14:18.098 "mibps": 16.077905333892485, 00:14:18.098 "io_failed": 0, 00:14:18.098 "io_timeout": 0, 00:14:18.098 "avg_latency_us": 31046.938922111316, 00:14:18.098 "min_latency_us": 4647.098181818182, 00:14:18.098 "max_latency_us": 23235.49090909091 00:14:18.098 } 00:14:18.098 ], 00:14:18.098 "core_count": 1 00:14:18.098 } 00:14:18.098 03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:18.098 03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 72353 00:14:18.098 03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72353 ']' 00:14:18.098 03:17:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72353 00:14:18.098 03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:18.098 03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:18.098 03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72353 00:14:18.098 03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:18.098 killing process with pid 72353 00:14:18.098 03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:18.098 03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72353' 00:14:18.098 Received shutdown signal, test time was about 10.000000 seconds 00:14:18.098 00:14:18.098 Latency(us) 00:14:18.098 [2024-10-09T03:17:01.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.098 [2024-10-09T03:17:01.401Z] =================================================================================================================== 00:14:18.098 [2024-10-09T03:17:01.401Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:18.098 03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72353 00:14:18.098 03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72353 00:14:18.383 03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 72315 00:14:18.383 03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72315 ']' 00:14:18.383 03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72315 00:14:18.383 03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:18.383 03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:18.383 03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72315 00:14:18.383 03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:18.383 killing process with pid 72315 00:14:18.383 03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:18.383 03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72315' 00:14:18.383 03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72315 00:14:18.383 03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72315 00:14:18.648 03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:14:18.648 03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:18.648 03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:18.648 03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:18.648 03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72486 00:14:18.648 03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:18.648 
03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72486 00:14:18.648 03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72486 ']' 00:14:18.648 03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.648 03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:18.648 03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.648 03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:18.648 03:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:18.648 [2024-10-09 03:17:01.897350] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:14:18.648 [2024-10-09 03:17:01.897470] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.908 [2024-10-09 03:17:02.035828] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.908 [2024-10-09 03:17:02.137528] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:18.908 [2024-10-09 03:17:02.137598] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:18.908 [2024-10-09 03:17:02.137610] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:18.908 [2024-10-09 03:17:02.137618] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:18.908 [2024-10-09 03:17:02.137625] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:18.908 [2024-10-09 03:17:02.138111] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.908 [2024-10-09 03:17:02.200459] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:19.167 03:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:19.167 03:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:19.167 03:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:19.167 03:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:19.167 03:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:19.167 03:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:19.167 03:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.KoRS3lc0K0 00:14:19.167 03:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.KoRS3lc0K0 00:14:19.167 03:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:19.426 [2024-10-09 03:17:02.617325] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:19.426 03:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:19.685 03:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:19.944 [2024-10-09 03:17:03.149373] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:19.944 [2024-10-09 03:17:03.149665] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:19.944 03:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:20.203 malloc0 00:14:20.203 03:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:20.462 03:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.KoRS3lc0K0 00:14:20.721 03:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:20.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
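With the target rebuilt and listening on 10.0.0.3:4420 again, the next entries (target/tls.sh@229 and @230, stamped 03:17:05 below) do the initiator-side half: the same PSK is registered in the bdevperf application's own keyring through its RPC socket, and a controller is attached over NVMe/TCP with that key. A condensed sketch using only flags that appear in this log; /var/tmp/bdevperf.sock is the socket bdevperf is started with:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock

  # the initiator needs its own keyring entry for the PSK
  $RPC -s $SOCK keyring_file_add_key key0 /tmp/tmp.KoRS3lc0K0

  # attach over TCP/IPv4 and secure the connection with key0
  $RPC -s $SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1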
00:14:20.980 03:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72540 00:14:20.980 03:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:20.980 03:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:20.980 03:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72540 /var/tmp/bdevperf.sock 00:14:20.980 03:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72540 ']' 00:14:20.980 03:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:20.980 03:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:20.980 03:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:20.980 03:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:20.980 03:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:21.239 [2024-10-09 03:17:04.298015] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:14:21.240 [2024-10-09 03:17:04.298142] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72540 ] 00:14:21.240 [2024-10-09 03:17:04.434512] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.499 [2024-10-09 03:17:04.584721] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.499 [2024-10-09 03:17:04.664389] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:22.066 03:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:22.066 03:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:22.066 03:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KoRS3lc0K0 00:14:22.325 03:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:22.583 [2024-10-09 03:17:05.834433] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:22.841 nvme0n1 00:14:22.841 03:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:22.841 Running I/O for 1 seconds... 
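Because bdevperf is launched with -z it sits idle until the workload is triggered over RPC; target/tls.sh@234 does that with bdevperf.py against the same socket, which is what produces the 'Running I/O for 1 seconds...' message just above and the per-job statistics that follow. A minimal sketch of the trigger, with paths as used in this run:

  # bdevperf already running with: -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests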
00:14:23.783 3678.00 IOPS, 14.37 MiB/s 00:14:23.783 Latency(us) 00:14:23.783 [2024-10-09T03:17:07.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.783 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:23.783 Verification LBA range: start 0x0 length 0x2000 00:14:23.783 nvme0n1 : 1.03 3693.52 14.43 0.00 0.00 34151.27 7626.01 21090.68 00:14:23.783 [2024-10-09T03:17:07.086Z] =================================================================================================================== 00:14:23.783 [2024-10-09T03:17:07.086Z] Total : 3693.52 14.43 0.00 0.00 34151.27 7626.01 21090.68 00:14:23.783 { 00:14:23.783 "results": [ 00:14:23.783 { 00:14:23.783 "job": "nvme0n1", 00:14:23.783 "core_mask": "0x2", 00:14:23.783 "workload": "verify", 00:14:23.783 "status": "finished", 00:14:23.783 "verify_range": { 00:14:23.783 "start": 0, 00:14:23.783 "length": 8192 00:14:23.783 }, 00:14:23.783 "queue_depth": 128, 00:14:23.783 "io_size": 4096, 00:14:23.783 "runtime": 1.030725, 00:14:23.783 "iops": 3693.516699410609, 00:14:23.783 "mibps": 14.427799607072691, 00:14:23.783 "io_failed": 0, 00:14:23.783 "io_timeout": 0, 00:14:23.783 "avg_latency_us": 34151.27173388733, 00:14:23.783 "min_latency_us": 7626.007272727273, 00:14:23.783 "max_latency_us": 21090.676363636365 00:14:23.783 } 00:14:23.783 ], 00:14:23.783 "core_count": 1 00:14:23.783 } 00:14:23.783 03:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72540 00:14:23.783 03:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72540 ']' 00:14:23.783 03:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72540 00:14:23.783 03:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:24.041 03:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:24.041 03:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72540 00:14:24.041 killing process with pid 72540 00:14:24.041 Received shutdown signal, test time was about 1.000000 seconds 00:14:24.041 00:14:24.041 Latency(us) 00:14:24.041 [2024-10-09T03:17:07.344Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:24.041 [2024-10-09T03:17:07.344Z] =================================================================================================================== 00:14:24.041 [2024-10-09T03:17:07.344Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:24.041 03:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:24.041 03:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:24.041 03:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72540' 00:14:24.041 03:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72540 00:14:24.041 03:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72540 00:14:24.299 03:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72486 00:14:24.299 03:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72486 ']' 00:14:24.300 03:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72486 00:14:24.300 03:17:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:24.300 03:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:24.300 03:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72486 00:14:24.300 killing process with pid 72486 00:14:24.300 03:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:24.300 03:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:24.300 03:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72486' 00:14:24.300 03:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72486 00:14:24.300 03:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72486 00:14:24.558 03:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:14:24.558 03:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:24.558 03:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:24.558 03:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:24.558 03:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72591 00:14:24.558 03:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:24.558 03:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72591 00:14:24.558 03:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72591 ']' 00:14:24.558 03:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.558 03:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:24.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.558 03:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.558 03:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:24.558 03:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:24.558 [2024-10-09 03:17:07.763238] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:14:24.558 [2024-10-09 03:17:07.763343] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:24.817 [2024-10-09 03:17:07.895654] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.817 [2024-10-09 03:17:07.983940] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:24.817 [2024-10-09 03:17:07.984026] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
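Between runs the killprocess helper from autotest_common.sh tears down whichever app owns a given pid. The checks visible in the trace boil down to the following reduced sketch (the sudo special case exists in the helper but is not hit in this job):

  kill -0 "$pid"                            # confirm the process still exists
  name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for the target, reactor_1 for bdevperf here
  echo "killing process with pid $pid"
  kill "$pid"                               # the helper branches if name == sudo; not taken in this job
  wait "$pid"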
00:14:24.817 [2024-10-09 03:17:07.984052] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:24.817 [2024-10-09 03:17:07.984060] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:24.817 [2024-10-09 03:17:07.984076] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:24.817 [2024-10-09 03:17:07.984481] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.817 [2024-10-09 03:17:08.038991] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:24.817 03:17:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:24.817 03:17:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:24.817 03:17:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:24.817 03:17:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:24.817 03:17:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:25.076 03:17:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:25.076 03:17:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:14:25.076 03:17:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.076 03:17:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:25.076 [2024-10-09 03:17:08.164756] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:25.076 malloc0 00:14:25.076 [2024-10-09 03:17:08.209878] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:25.076 [2024-10-09 03:17:08.210136] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:25.076 03:17:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.076 03:17:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72615 00:14:25.076 03:17:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:25.076 03:17:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72615 /var/tmp/bdevperf.sock 00:14:25.076 03:17:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72615 ']' 00:14:25.076 03:17:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:25.076 03:17:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:25.076 03:17:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:25.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
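While the second bdevperf instance (pid 72615) starts up, note that the target side has already been provisioned over /var/tmp/spdk.sock: the TCP transport is created, malloc0 is exposed as namespace 1 of nqn.2016-06.io.spdk:cnode1, host1 is admitted with PSK key0, and the listener on 10.0.0.3:4420 is opened with the ssl socket implementation, which is what triggers the "TLS support is considered experimental" notices. The TLS-relevant entries, lifted from the save_config dump taken a little further below:

  { "method": "keyring_file_add_key",
    "params": { "name": "key0", "path": "/tmp/tmp.KoRS3lc0K0" } }
  { "method": "nvmf_subsystem_add_host",
    "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "host": "nqn.2016-06.io.spdk:host1", "psk": "key0" } }
  { "method": "nvmf_subsystem_add_listener",
    "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                "listen_address": { "trtype": "TCP", "adrfam": "IPv4", "traddr": "10.0.0.3", "trsvcid": "4420" },
                "secure_channel": false, "sock_impl": "ssl" } }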
00:14:25.076 03:17:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:25.076 03:17:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:25.076 [2024-10-09 03:17:08.301554] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:14:25.076 [2024-10-09 03:17:08.301653] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72615 ] 00:14:25.334 [2024-10-09 03:17:08.439752] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.334 [2024-10-09 03:17:08.577616] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.593 [2024-10-09 03:17:08.655558] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:26.159 03:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:26.159 03:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:26.159 03:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KoRS3lc0K0 00:14:26.417 03:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:26.675 [2024-10-09 03:17:09.902006] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:26.675 nvme0n1 00:14:26.936 03:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:26.936 Running I/O for 1 seconds... 
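Each perform_tests run ends with a human-readable latency table followed by the same data as JSON (one such block appears above for pid 72540 and another follows for this run). The fields cross-check directly: iops is completed I/Os divided by the measured runtime, and mibps is iops times the 4096-byte io_size expressed in MiB. Using the first run's numbers:

  completed I/Os ≈ iops * runtime         = 3693.5167 * 1.030725  ≈ 3807
  mibps          = iops * io_size / 2^20  = 3693.5167 * 4096 / 1048576 ≈ 14.43 MiB/s

The measured runtime (≈1.03 s) is slightly longer than the nominal -t 1, which is why the rates are computed against the actual runtime rather than the configured duration.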
00:14:27.873 4001.00 IOPS, 15.63 MiB/s 00:14:27.873 Latency(us) 00:14:27.873 [2024-10-09T03:17:11.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.873 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:27.873 Verification LBA range: start 0x0 length 0x2000 00:14:27.873 nvme0n1 : 1.02 4062.27 15.87 0.00 0.00 31220.76 6315.29 25976.09 00:14:27.873 [2024-10-09T03:17:11.176Z] =================================================================================================================== 00:14:27.873 [2024-10-09T03:17:11.176Z] Total : 4062.27 15.87 0.00 0.00 31220.76 6315.29 25976.09 00:14:27.873 { 00:14:27.873 "results": [ 00:14:27.873 { 00:14:27.873 "job": "nvme0n1", 00:14:27.873 "core_mask": "0x2", 00:14:27.873 "workload": "verify", 00:14:27.873 "status": "finished", 00:14:27.873 "verify_range": { 00:14:27.873 "start": 0, 00:14:27.873 "length": 8192 00:14:27.873 }, 00:14:27.873 "queue_depth": 128, 00:14:27.873 "io_size": 4096, 00:14:27.873 "runtime": 1.016426, 00:14:27.873 "iops": 4062.273102026119, 00:14:27.873 "mibps": 15.868254304789527, 00:14:27.873 "io_failed": 0, 00:14:27.873 "io_timeout": 0, 00:14:27.873 "avg_latency_us": 31220.76370858011, 00:14:27.873 "min_latency_us": 6315.2872727272725, 00:14:27.873 "max_latency_us": 25976.087272727273 00:14:27.873 } 00:14:27.873 ], 00:14:27.873 "core_count": 1 00:14:27.873 } 00:14:27.873 03:17:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:14:27.873 03:17:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.873 03:17:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:28.133 03:17:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.133 03:17:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:14:28.133 "subsystems": [ 00:14:28.133 { 00:14:28.133 "subsystem": "keyring", 00:14:28.133 "config": [ 00:14:28.133 { 00:14:28.133 "method": "keyring_file_add_key", 00:14:28.133 "params": { 00:14:28.133 "name": "key0", 00:14:28.133 "path": "/tmp/tmp.KoRS3lc0K0" 00:14:28.133 } 00:14:28.133 } 00:14:28.133 ] 00:14:28.133 }, 00:14:28.133 { 00:14:28.133 "subsystem": "iobuf", 00:14:28.133 "config": [ 00:14:28.133 { 00:14:28.133 "method": "iobuf_set_options", 00:14:28.133 "params": { 00:14:28.133 "small_pool_count": 8192, 00:14:28.133 "large_pool_count": 1024, 00:14:28.133 "small_bufsize": 8192, 00:14:28.133 "large_bufsize": 135168 00:14:28.133 } 00:14:28.133 } 00:14:28.133 ] 00:14:28.133 }, 00:14:28.133 { 00:14:28.133 "subsystem": "sock", 00:14:28.133 "config": [ 00:14:28.133 { 00:14:28.133 "method": "sock_set_default_impl", 00:14:28.133 "params": { 00:14:28.133 "impl_name": "uring" 00:14:28.133 } 00:14:28.133 }, 00:14:28.133 { 00:14:28.133 "method": "sock_impl_set_options", 00:14:28.133 "params": { 00:14:28.133 "impl_name": "ssl", 00:14:28.133 "recv_buf_size": 4096, 00:14:28.133 "send_buf_size": 4096, 00:14:28.133 "enable_recv_pipe": true, 00:14:28.133 "enable_quickack": false, 00:14:28.133 "enable_placement_id": 0, 00:14:28.133 "enable_zerocopy_send_server": true, 00:14:28.133 "enable_zerocopy_send_client": false, 00:14:28.133 "zerocopy_threshold": 0, 00:14:28.133 "tls_version": 0, 00:14:28.133 "enable_ktls": false 00:14:28.133 } 00:14:28.133 }, 00:14:28.133 { 00:14:28.133 "method": "sock_impl_set_options", 00:14:28.133 "params": { 00:14:28.133 "impl_name": "posix", 00:14:28.133 "recv_buf_size": 
2097152, 00:14:28.133 "send_buf_size": 2097152, 00:14:28.133 "enable_recv_pipe": true, 00:14:28.133 "enable_quickack": false, 00:14:28.133 "enable_placement_id": 0, 00:14:28.133 "enable_zerocopy_send_server": true, 00:14:28.133 "enable_zerocopy_send_client": false, 00:14:28.133 "zerocopy_threshold": 0, 00:14:28.133 "tls_version": 0, 00:14:28.133 "enable_ktls": false 00:14:28.133 } 00:14:28.133 }, 00:14:28.133 { 00:14:28.133 "method": "sock_impl_set_options", 00:14:28.133 "params": { 00:14:28.133 "impl_name": "uring", 00:14:28.133 "recv_buf_size": 2097152, 00:14:28.133 "send_buf_size": 2097152, 00:14:28.133 "enable_recv_pipe": true, 00:14:28.133 "enable_quickack": false, 00:14:28.133 "enable_placement_id": 0, 00:14:28.133 "enable_zerocopy_send_server": false, 00:14:28.133 "enable_zerocopy_send_client": false, 00:14:28.133 "zerocopy_threshold": 0, 00:14:28.133 "tls_version": 0, 00:14:28.133 "enable_ktls": false 00:14:28.133 } 00:14:28.133 } 00:14:28.133 ] 00:14:28.133 }, 00:14:28.133 { 00:14:28.133 "subsystem": "vmd", 00:14:28.133 "config": [] 00:14:28.133 }, 00:14:28.133 { 00:14:28.133 "subsystem": "accel", 00:14:28.133 "config": [ 00:14:28.133 { 00:14:28.133 "method": "accel_set_options", 00:14:28.133 "params": { 00:14:28.133 "small_cache_size": 128, 00:14:28.133 "large_cache_size": 16, 00:14:28.133 "task_count": 2048, 00:14:28.133 "sequence_count": 2048, 00:14:28.133 "buf_count": 2048 00:14:28.133 } 00:14:28.134 } 00:14:28.134 ] 00:14:28.134 }, 00:14:28.134 { 00:14:28.134 "subsystem": "bdev", 00:14:28.134 "config": [ 00:14:28.134 { 00:14:28.134 "method": "bdev_set_options", 00:14:28.134 "params": { 00:14:28.134 "bdev_io_pool_size": 65535, 00:14:28.134 "bdev_io_cache_size": 256, 00:14:28.134 "bdev_auto_examine": true, 00:14:28.134 "iobuf_small_cache_size": 128, 00:14:28.134 "iobuf_large_cache_size": 16 00:14:28.134 } 00:14:28.134 }, 00:14:28.134 { 00:14:28.134 "method": "bdev_raid_set_options", 00:14:28.134 "params": { 00:14:28.134 "process_window_size_kb": 1024, 00:14:28.134 "process_max_bandwidth_mb_sec": 0 00:14:28.134 } 00:14:28.134 }, 00:14:28.134 { 00:14:28.134 "method": "bdev_iscsi_set_options", 00:14:28.134 "params": { 00:14:28.134 "timeout_sec": 30 00:14:28.134 } 00:14:28.134 }, 00:14:28.134 { 00:14:28.134 "method": "bdev_nvme_set_options", 00:14:28.134 "params": { 00:14:28.134 "action_on_timeout": "none", 00:14:28.134 "timeout_us": 0, 00:14:28.134 "timeout_admin_us": 0, 00:14:28.134 "keep_alive_timeout_ms": 10000, 00:14:28.134 "arbitration_burst": 0, 00:14:28.134 "low_priority_weight": 0, 00:14:28.134 "medium_priority_weight": 0, 00:14:28.134 "high_priority_weight": 0, 00:14:28.134 "nvme_adminq_poll_period_us": 10000, 00:14:28.134 "nvme_ioq_poll_period_us": 0, 00:14:28.134 "io_queue_requests": 0, 00:14:28.134 "delay_cmd_submit": true, 00:14:28.134 "transport_retry_count": 4, 00:14:28.134 "bdev_retry_count": 3, 00:14:28.134 "transport_ack_timeout": 0, 00:14:28.134 "ctrlr_loss_timeout_sec": 0, 00:14:28.134 "reconnect_delay_sec": 0, 00:14:28.134 "fast_io_fail_timeout_sec": 0, 00:14:28.134 "disable_auto_failback": false, 00:14:28.134 "generate_uuids": false, 00:14:28.134 "transport_tos": 0, 00:14:28.134 "nvme_error_stat": false, 00:14:28.134 "rdma_srq_size": 0, 00:14:28.134 "io_path_stat": false, 00:14:28.134 "allow_accel_sequence": false, 00:14:28.134 "rdma_max_cq_size": 0, 00:14:28.134 "rdma_cm_event_timeout_ms": 0, 00:14:28.134 "dhchap_digests": [ 00:14:28.134 "sha256", 00:14:28.134 "sha384", 00:14:28.134 "sha512" 00:14:28.134 ], 00:14:28.134 "dhchap_dhgroups": [ 00:14:28.134 
"null", 00:14:28.134 "ffdhe2048", 00:14:28.134 "ffdhe3072", 00:14:28.134 "ffdhe4096", 00:14:28.134 "ffdhe6144", 00:14:28.134 "ffdhe8192" 00:14:28.134 ] 00:14:28.134 } 00:14:28.134 }, 00:14:28.134 { 00:14:28.134 "method": "bdev_nvme_set_hotplug", 00:14:28.134 "params": { 00:14:28.134 "period_us": 100000, 00:14:28.134 "enable": false 00:14:28.134 } 00:14:28.134 }, 00:14:28.134 { 00:14:28.134 "method": "bdev_malloc_create", 00:14:28.134 "params": { 00:14:28.134 "name": "malloc0", 00:14:28.134 "num_blocks": 8192, 00:14:28.134 "block_size": 4096, 00:14:28.134 "physical_block_size": 4096, 00:14:28.134 "uuid": "023de5cc-5644-4646-a8a2-ed948ee727df", 00:14:28.134 "optimal_io_boundary": 0, 00:14:28.134 "md_size": 0, 00:14:28.134 "dif_type": 0, 00:14:28.134 "dif_is_head_of_md": false, 00:14:28.134 "dif_pi_format": 0 00:14:28.134 } 00:14:28.134 }, 00:14:28.134 { 00:14:28.134 "method": "bdev_wait_for_examine" 00:14:28.134 } 00:14:28.134 ] 00:14:28.134 }, 00:14:28.134 { 00:14:28.134 "subsystem": "nbd", 00:14:28.134 "config": [] 00:14:28.134 }, 00:14:28.134 { 00:14:28.134 "subsystem": "scheduler", 00:14:28.134 "config": [ 00:14:28.134 { 00:14:28.134 "method": "framework_set_scheduler", 00:14:28.134 "params": { 00:14:28.134 "name": "static" 00:14:28.134 } 00:14:28.134 } 00:14:28.134 ] 00:14:28.134 }, 00:14:28.134 { 00:14:28.134 "subsystem": "nvmf", 00:14:28.134 "config": [ 00:14:28.134 { 00:14:28.134 "method": "nvmf_set_config", 00:14:28.134 "params": { 00:14:28.134 "discovery_filter": "match_any", 00:14:28.134 "admin_cmd_passthru": { 00:14:28.134 "identify_ctrlr": false 00:14:28.134 }, 00:14:28.134 "dhchap_digests": [ 00:14:28.134 "sha256", 00:14:28.134 "sha384", 00:14:28.134 "sha512" 00:14:28.134 ], 00:14:28.134 "dhchap_dhgroups": [ 00:14:28.134 "null", 00:14:28.134 "ffdhe2048", 00:14:28.134 "ffdhe3072", 00:14:28.134 "ffdhe4096", 00:14:28.134 "ffdhe6144", 00:14:28.134 "ffdhe8192" 00:14:28.134 ] 00:14:28.134 } 00:14:28.134 }, 00:14:28.134 { 00:14:28.134 "method": "nvmf_set_max_subsystems", 00:14:28.134 "params": { 00:14:28.134 "max_subsystems": 1024 00:14:28.134 } 00:14:28.134 }, 00:14:28.134 { 00:14:28.134 "method": "nvmf_set_crdt", 00:14:28.134 "params": { 00:14:28.134 "crdt1": 0, 00:14:28.134 "crdt2": 0, 00:14:28.134 "crdt3": 0 00:14:28.134 } 00:14:28.134 }, 00:14:28.134 { 00:14:28.134 "method": "nvmf_create_transport", 00:14:28.134 "params": { 00:14:28.134 "trtype": "TCP", 00:14:28.134 "max_queue_depth": 128, 00:14:28.134 "max_io_qpairs_per_ctrlr": 127, 00:14:28.134 "in_capsule_data_size": 4096, 00:14:28.134 "max_io_size": 131072, 00:14:28.134 "io_unit_size": 131072, 00:14:28.134 "max_aq_depth": 128, 00:14:28.134 "num_shared_buffers": 511, 00:14:28.134 "buf_cache_size": 4294967295, 00:14:28.134 "dif_insert_or_strip": false, 00:14:28.134 "zcopy": false, 00:14:28.134 "c2h_success": false, 00:14:28.134 "sock_priority": 0, 00:14:28.134 "abort_timeout_sec": 1, 00:14:28.135 "ack_timeout": 0, 00:14:28.135 "data_wr_pool_size": 0 00:14:28.135 } 00:14:28.135 }, 00:14:28.135 { 00:14:28.135 "method": "nvmf_create_subsystem", 00:14:28.135 "params": { 00:14:28.135 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:28.135 "allow_any_host": false, 00:14:28.135 "serial_number": "00000000000000000000", 00:14:28.135 "model_number": "SPDK bdev Controller", 00:14:28.135 "max_namespaces": 32, 00:14:28.135 "min_cntlid": 1, 00:14:28.135 "max_cntlid": 65519, 00:14:28.135 "ana_reporting": false 00:14:28.135 } 00:14:28.135 }, 00:14:28.135 { 00:14:28.135 "method": "nvmf_subsystem_add_host", 00:14:28.135 "params": { 00:14:28.135 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:14:28.135 "host": "nqn.2016-06.io.spdk:host1", 00:14:28.135 "psk": "key0" 00:14:28.135 } 00:14:28.135 }, 00:14:28.135 { 00:14:28.135 "method": "nvmf_subsystem_add_ns", 00:14:28.135 "params": { 00:14:28.135 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:28.135 "namespace": { 00:14:28.135 "nsid": 1, 00:14:28.135 "bdev_name": "malloc0", 00:14:28.135 "nguid": "023DE5CC56444646A8A2ED948EE727DF", 00:14:28.135 "uuid": "023de5cc-5644-4646-a8a2-ed948ee727df", 00:14:28.135 "no_auto_visible": false 00:14:28.135 } 00:14:28.135 } 00:14:28.135 }, 00:14:28.135 { 00:14:28.135 "method": "nvmf_subsystem_add_listener", 00:14:28.135 "params": { 00:14:28.135 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:28.135 "listen_address": { 00:14:28.135 "trtype": "TCP", 00:14:28.135 "adrfam": "IPv4", 00:14:28.135 "traddr": "10.0.0.3", 00:14:28.135 "trsvcid": "4420" 00:14:28.135 }, 00:14:28.135 "secure_channel": false, 00:14:28.135 "sock_impl": "ssl" 00:14:28.135 } 00:14:28.135 } 00:14:28.135 ] 00:14:28.135 } 00:14:28.135 ] 00:14:28.135 }' 00:14:28.135 03:17:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:28.395 03:17:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:14:28.395 "subsystems": [ 00:14:28.395 { 00:14:28.395 "subsystem": "keyring", 00:14:28.395 "config": [ 00:14:28.395 { 00:14:28.395 "method": "keyring_file_add_key", 00:14:28.395 "params": { 00:14:28.395 "name": "key0", 00:14:28.395 "path": "/tmp/tmp.KoRS3lc0K0" 00:14:28.395 } 00:14:28.395 } 00:14:28.395 ] 00:14:28.395 }, 00:14:28.395 { 00:14:28.395 "subsystem": "iobuf", 00:14:28.395 "config": [ 00:14:28.395 { 00:14:28.395 "method": "iobuf_set_options", 00:14:28.395 "params": { 00:14:28.395 "small_pool_count": 8192, 00:14:28.395 "large_pool_count": 1024, 00:14:28.395 "small_bufsize": 8192, 00:14:28.395 "large_bufsize": 135168 00:14:28.395 } 00:14:28.395 } 00:14:28.395 ] 00:14:28.395 }, 00:14:28.395 { 00:14:28.395 "subsystem": "sock", 00:14:28.395 "config": [ 00:14:28.395 { 00:14:28.395 "method": "sock_set_default_impl", 00:14:28.395 "params": { 00:14:28.395 "impl_name": "uring" 00:14:28.395 } 00:14:28.395 }, 00:14:28.395 { 00:14:28.395 "method": "sock_impl_set_options", 00:14:28.395 "params": { 00:14:28.395 "impl_name": "ssl", 00:14:28.395 "recv_buf_size": 4096, 00:14:28.395 "send_buf_size": 4096, 00:14:28.395 "enable_recv_pipe": true, 00:14:28.395 "enable_quickack": false, 00:14:28.395 "enable_placement_id": 0, 00:14:28.395 "enable_zerocopy_send_server": true, 00:14:28.395 "enable_zerocopy_send_client": false, 00:14:28.395 "zerocopy_threshold": 0, 00:14:28.395 "tls_version": 0, 00:14:28.395 "enable_ktls": false 00:14:28.395 } 00:14:28.395 }, 00:14:28.395 { 00:14:28.395 "method": "sock_impl_set_options", 00:14:28.395 "params": { 00:14:28.395 "impl_name": "posix", 00:14:28.395 "recv_buf_size": 2097152, 00:14:28.395 "send_buf_size": 2097152, 00:14:28.395 "enable_recv_pipe": true, 00:14:28.395 "enable_quickack": false, 00:14:28.395 "enable_placement_id": 0, 00:14:28.395 "enable_zerocopy_send_server": true, 00:14:28.395 "enable_zerocopy_send_client": false, 00:14:28.395 "zerocopy_threshold": 0, 00:14:28.395 "tls_version": 0, 00:14:28.395 "enable_ktls": false 00:14:28.395 } 00:14:28.395 }, 00:14:28.395 { 00:14:28.395 "method": "sock_impl_set_options", 00:14:28.395 "params": { 00:14:28.395 "impl_name": "uring", 00:14:28.395 "recv_buf_size": 2097152, 00:14:28.395 "send_buf_size": 2097152, 00:14:28.395 
"enable_recv_pipe": true, 00:14:28.395 "enable_quickack": false, 00:14:28.395 "enable_placement_id": 0, 00:14:28.395 "enable_zerocopy_send_server": false, 00:14:28.395 "enable_zerocopy_send_client": false, 00:14:28.395 "zerocopy_threshold": 0, 00:14:28.395 "tls_version": 0, 00:14:28.395 "enable_ktls": false 00:14:28.395 } 00:14:28.395 } 00:14:28.395 ] 00:14:28.395 }, 00:14:28.395 { 00:14:28.395 "subsystem": "vmd", 00:14:28.395 "config": [] 00:14:28.395 }, 00:14:28.395 { 00:14:28.395 "subsystem": "accel", 00:14:28.395 "config": [ 00:14:28.395 { 00:14:28.395 "method": "accel_set_options", 00:14:28.395 "params": { 00:14:28.395 "small_cache_size": 128, 00:14:28.395 "large_cache_size": 16, 00:14:28.395 "task_count": 2048, 00:14:28.395 "sequence_count": 2048, 00:14:28.395 "buf_count": 2048 00:14:28.395 } 00:14:28.395 } 00:14:28.395 ] 00:14:28.395 }, 00:14:28.395 { 00:14:28.395 "subsystem": "bdev", 00:14:28.395 "config": [ 00:14:28.395 { 00:14:28.395 "method": "bdev_set_options", 00:14:28.395 "params": { 00:14:28.395 "bdev_io_pool_size": 65535, 00:14:28.395 "bdev_io_cache_size": 256, 00:14:28.395 "bdev_auto_examine": true, 00:14:28.395 "iobuf_small_cache_size": 128, 00:14:28.395 "iobuf_large_cache_size": 16 00:14:28.395 } 00:14:28.395 }, 00:14:28.395 { 00:14:28.395 "method": "bdev_raid_set_options", 00:14:28.395 "params": { 00:14:28.395 "process_window_size_kb": 1024, 00:14:28.395 "process_max_bandwidth_mb_sec": 0 00:14:28.395 } 00:14:28.395 }, 00:14:28.395 { 00:14:28.395 "method": "bdev_iscsi_set_options", 00:14:28.395 "params": { 00:14:28.395 "timeout_sec": 30 00:14:28.395 } 00:14:28.395 }, 00:14:28.395 { 00:14:28.395 "method": "bdev_nvme_set_options", 00:14:28.395 "params": { 00:14:28.395 "action_on_timeout": "none", 00:14:28.395 "timeout_us": 0, 00:14:28.395 "timeout_admin_us": 0, 00:14:28.395 "keep_alive_timeout_ms": 10000, 00:14:28.395 "arbitration_burst": 0, 00:14:28.395 "low_priority_weight": 0, 00:14:28.395 "medium_priority_weight": 0, 00:14:28.395 "high_priority_weight": 0, 00:14:28.395 "nvme_adminq_poll_period_us": 10000, 00:14:28.395 "nvme_ioq_poll_period_us": 0, 00:14:28.395 "io_queue_requests": 512, 00:14:28.395 "delay_cmd_submit": true, 00:14:28.395 "transport_retry_count": 4, 00:14:28.395 "bdev_retry_count": 3, 00:14:28.395 "transport_ack_timeout": 0, 00:14:28.395 "ctrlr_loss_timeout_sec": 0, 00:14:28.395 "reconnect_delay_sec": 0, 00:14:28.395 "fast_io_fail_timeout_sec": 0, 00:14:28.395 "disable_auto_failback": false, 00:14:28.395 "generate_uuids": false, 00:14:28.395 "transport_tos": 0, 00:14:28.395 "nvme_error_stat": false, 00:14:28.395 "rdma_srq_size": 0, 00:14:28.395 "io_path_stat": false, 00:14:28.395 "allow_accel_sequence": false, 00:14:28.395 "rdma_max_cq_size": 0, 00:14:28.395 "rdma_cm_event_timeout_ms": 0, 00:14:28.395 "dhchap_digests": [ 00:14:28.395 "sha256", 00:14:28.395 "sha384", 00:14:28.395 "sha512" 00:14:28.395 ], 00:14:28.395 "dhchap_dhgroups": [ 00:14:28.395 "null", 00:14:28.395 "ffdhe2048", 00:14:28.395 "ffdhe3072", 00:14:28.395 "ffdhe4096", 00:14:28.395 "ffdhe6144", 00:14:28.395 "ffdhe8192" 00:14:28.395 ] 00:14:28.395 } 00:14:28.395 }, 00:14:28.395 { 00:14:28.395 "method": "bdev_nvme_attach_controller", 00:14:28.395 "params": { 00:14:28.395 "name": "nvme0", 00:14:28.395 "trtype": "TCP", 00:14:28.396 "adrfam": "IPv4", 00:14:28.396 "traddr": "10.0.0.3", 00:14:28.396 "trsvcid": "4420", 00:14:28.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:28.396 "prchk_reftag": false, 00:14:28.396 "prchk_guard": false, 00:14:28.396 "ctrlr_loss_timeout_sec": 0, 00:14:28.396 
"reconnect_delay_sec": 0, 00:14:28.396 "fast_io_fail_timeout_sec": 0, 00:14:28.396 "psk": "key0", 00:14:28.396 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:28.396 "hdgst": false, 00:14:28.396 "ddgst": false, 00:14:28.396 "multipath": "multipath" 00:14:28.396 } 00:14:28.396 }, 00:14:28.396 { 00:14:28.396 "method": "bdev_nvme_set_hotplug", 00:14:28.396 "params": { 00:14:28.396 "period_us": 100000, 00:14:28.396 "enable": false 00:14:28.396 } 00:14:28.396 }, 00:14:28.396 { 00:14:28.396 "method": "bdev_enable_histogram", 00:14:28.396 "params": { 00:14:28.396 "name": "nvme0n1", 00:14:28.396 "enable": true 00:14:28.396 } 00:14:28.396 }, 00:14:28.396 { 00:14:28.396 "method": "bdev_wait_for_examine" 00:14:28.396 } 00:14:28.396 ] 00:14:28.396 }, 00:14:28.396 { 00:14:28.396 "subsystem": "nbd", 00:14:28.396 "config": [] 00:14:28.396 } 00:14:28.396 ] 00:14:28.396 }' 00:14:28.396 03:17:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72615 00:14:28.396 03:17:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72615 ']' 00:14:28.396 03:17:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72615 00:14:28.396 03:17:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:28.396 03:17:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:28.396 03:17:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72615 00:14:28.396 03:17:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:28.396 03:17:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:28.396 killing process with pid 72615 00:14:28.396 03:17:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72615' 00:14:28.396 03:17:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72615 00:14:28.396 Received shutdown signal, test time was about 1.000000 seconds 00:14:28.396 00:14:28.396 Latency(us) 00:14:28.396 [2024-10-09T03:17:11.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:28.396 [2024-10-09T03:17:11.699Z] =================================================================================================================== 00:14:28.396 [2024-10-09T03:17:11.699Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:28.396 03:17:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72615 00:14:28.962 03:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72591 00:14:28.962 03:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72591 ']' 00:14:28.962 03:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72591 00:14:28.962 03:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:28.962 03:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:28.962 03:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72591 00:14:28.962 killing process with pid 72591 00:14:28.962 03:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:28.962 03:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:28.962 03:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72591' 00:14:28.962 03:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72591 00:14:28.962 03:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72591 00:14:29.222 03:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:14:29.222 03:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:29.222 03:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:14:29.222 "subsystems": [ 00:14:29.222 { 00:14:29.222 "subsystem": "keyring", 00:14:29.222 "config": [ 00:14:29.222 { 00:14:29.222 "method": "keyring_file_add_key", 00:14:29.222 "params": { 00:14:29.222 "name": "key0", 00:14:29.222 "path": "/tmp/tmp.KoRS3lc0K0" 00:14:29.222 } 00:14:29.222 } 00:14:29.222 ] 00:14:29.222 }, 00:14:29.222 { 00:14:29.222 "subsystem": "iobuf", 00:14:29.222 "config": [ 00:14:29.222 { 00:14:29.222 "method": "iobuf_set_options", 00:14:29.222 "params": { 00:14:29.222 "small_pool_count": 8192, 00:14:29.222 "large_pool_count": 1024, 00:14:29.222 "small_bufsize": 8192, 00:14:29.222 "large_bufsize": 135168 00:14:29.222 } 00:14:29.222 } 00:14:29.222 ] 00:14:29.222 }, 00:14:29.222 { 00:14:29.222 "subsystem": "sock", 00:14:29.222 "config": [ 00:14:29.222 { 00:14:29.222 "method": "sock_set_default_impl", 00:14:29.222 "params": { 00:14:29.222 "impl_name": "uring" 00:14:29.222 } 00:14:29.222 }, 00:14:29.222 { 00:14:29.222 "method": "sock_impl_set_options", 00:14:29.222 "params": { 00:14:29.222 "impl_name": "ssl", 00:14:29.222 "recv_buf_size": 4096, 00:14:29.222 "send_buf_size": 4096, 00:14:29.222 "enable_recv_pipe": true, 00:14:29.222 "enable_quickack": false, 00:14:29.222 "enable_placement_id": 0, 00:14:29.222 "enable_zerocopy_send_server": true, 00:14:29.222 "enable_zerocopy_send_client": false, 00:14:29.222 "zerocopy_threshold": 0, 00:14:29.222 "tls_version": 0, 00:14:29.222 "enable_ktls": false 00:14:29.222 } 00:14:29.222 }, 00:14:29.222 { 00:14:29.222 "method": "sock_impl_set_options", 00:14:29.222 "params": { 00:14:29.222 "impl_name": "posix", 00:14:29.222 "recv_buf_size": 2097152, 00:14:29.222 "send_buf_size": 2097152, 00:14:29.222 "enable_recv_pipe": true, 00:14:29.222 "enable_quickack": false, 00:14:29.222 "enable_placement_id": 0, 00:14:29.222 "enable_zerocopy_send_server": true, 00:14:29.222 "enable_zerocopy_send_client": false, 00:14:29.222 "zerocopy_threshold": 0, 00:14:29.222 "tls_version": 0, 00:14:29.222 "enable_ktls": false 00:14:29.222 } 00:14:29.222 }, 00:14:29.222 { 00:14:29.222 "method": "sock_impl_set_options", 00:14:29.222 "params": { 00:14:29.222 "impl_name": "uring", 00:14:29.222 "recv_buf_size": 2097152, 00:14:29.222 "send_buf_size": 2097152, 00:14:29.222 "enable_recv_pipe": true, 00:14:29.222 "enable_quickack": false, 00:14:29.222 "enable_placement_id": 0, 00:14:29.222 "enable_zerocopy_send_server": false, 00:14:29.222 "enable_zerocopy_send_client": false, 00:14:29.222 "zerocopy_threshold": 0, 00:14:29.222 "tls_version": 0, 00:14:29.222 "enable_ktls": false 00:14:29.222 } 00:14:29.222 } 00:14:29.222 ] 00:14:29.222 }, 00:14:29.222 { 00:14:29.222 "subsystem": "vmd", 00:14:29.222 "config": [] 00:14:29.222 }, 00:14:29.222 { 00:14:29.222 "subsystem": "accel", 00:14:29.222 "config": [ 00:14:29.222 { 00:14:29.222 "method": "accel_set_options", 
00:14:29.222 "params": { 00:14:29.222 "small_cache_size": 128, 00:14:29.222 "large_cache_size": 16, 00:14:29.222 "task_count": 2048, 00:14:29.222 "sequence_count": 2048, 00:14:29.222 "buf_count": 2048 00:14:29.222 } 00:14:29.222 } 00:14:29.222 ] 00:14:29.222 }, 00:14:29.222 { 00:14:29.222 "subsystem": "bdev", 00:14:29.222 "config": [ 00:14:29.222 { 00:14:29.222 "method": "bdev_set_options", 00:14:29.222 "params": { 00:14:29.222 "bdev_io_pool_size": 65535, 00:14:29.222 "bdev_io_cache_size": 256, 00:14:29.222 "bdev_auto_examine": true, 00:14:29.222 "iobuf_small_cache_size": 128, 00:14:29.222 "iobuf_large_cache_size": 16 00:14:29.222 } 00:14:29.222 }, 00:14:29.222 { 00:14:29.222 "method": "bdev_raid_set_options", 00:14:29.222 "params": { 00:14:29.222 "process_window_size_kb": 1024, 00:14:29.222 "process_max_bandwidth_mb_sec": 0 00:14:29.222 } 00:14:29.222 }, 00:14:29.222 { 00:14:29.222 "method": "bdev_iscsi_set_options", 00:14:29.222 "params": { 00:14:29.222 "timeout_sec": 30 00:14:29.222 } 00:14:29.222 }, 00:14:29.222 { 00:14:29.222 "method": "bdev_nvme_set_options", 00:14:29.222 "params": { 00:14:29.222 "action_on_timeout": "none", 00:14:29.222 "timeout_us": 0, 00:14:29.222 "timeout_admin_us": 0, 00:14:29.222 "keep_alive_timeout_ms": 10000, 00:14:29.222 "arbitration_burst": 0, 00:14:29.222 "low_priority_weight": 0, 00:14:29.222 "medium_priority_weight": 0, 00:14:29.222 "high_priority_weight": 0, 00:14:29.222 "nvme_adminq_poll_period_us": 10000, 00:14:29.222 "nvme_ioq_poll_period_us": 0, 00:14:29.222 "io_queue_requests": 0, 00:14:29.222 "delay_cmd_submit": true, 00:14:29.223 "transport_retry_count": 4, 00:14:29.223 "bdev_retry_count": 3, 00:14:29.223 "transport_ack_timeout": 0, 00:14:29.223 "ctrlr_loss_timeout_sec": 0, 00:14:29.223 "reconnect_delay_sec": 0, 00:14:29.223 "fast_io_fail_timeout_sec": 0, 00:14:29.223 "disable_auto_failback": false, 00:14:29.223 "generate_uuids": false, 00:14:29.223 "transport_tos": 0, 00:14:29.223 "nvme_error_stat": false, 00:14:29.223 "rdma_srq_size": 0, 00:14:29.223 "io_path_stat": false, 00:14:29.223 "allow_accel_sequence": false, 00:14:29.223 "rdma_max_cq_size": 0, 00:14:29.223 "rdma_cm_event_timeout_ms": 0, 00:14:29.223 "dhchap_digests": [ 00:14:29.223 "sha256", 00:14:29.223 "sha384", 00:14:29.223 "sha512" 00:14:29.223 ], 00:14:29.223 "dhchap_dhgroups": [ 00:14:29.223 "null", 00:14:29.223 "ffdhe2048", 00:14:29.223 "ffdhe3072", 00:14:29.223 "ffdhe4096", 00:14:29.223 "ffdhe6144", 00:14:29.223 "ffdhe8192" 00:14:29.223 ] 00:14:29.223 } 00:14:29.223 }, 00:14:29.223 { 00:14:29.223 "method": "bdev_nvme_set_hotplug", 00:14:29.223 "params": { 00:14:29.223 "period_us": 100000, 00:14:29.223 "enable": false 00:14:29.223 } 00:14:29.223 }, 00:14:29.223 { 00:14:29.223 "method": "bdev_malloc_create", 00:14:29.223 "params": { 00:14:29.223 "name": "malloc0", 00:14:29.223 "num_blocks": 8192, 00:14:29.223 "block_size": 4096, 00:14:29.223 "physical_block_size": 4096, 00:14:29.223 "uuid": "023de5cc-5644-4646-a8a2-ed948ee727df", 00:14:29.223 "optimal_io_boundary": 0, 00:14:29.223 "md_size": 0, 00:14:29.223 "dif_type": 0, 00:14:29.223 "dif_is_head_of_md": false, 00:14:29.223 "dif_pi_format": 0 00:14:29.223 } 00:14:29.223 }, 00:14:29.223 { 00:14:29.223 "method": "bdev_wait_for_examine" 00:14:29.223 } 00:14:29.223 ] 00:14:29.223 }, 00:14:29.223 { 00:14:29.223 "subsystem": "nbd", 00:14:29.223 "config": [] 00:14:29.223 }, 00:14:29.223 { 00:14:29.223 "subsystem": "scheduler", 00:14:29.223 "config": [ 00:14:29.223 { 00:14:29.223 "method": "framework_set_scheduler", 00:14:29.223 
"params": { 00:14:29.223 "name": "static" 00:14:29.223 } 00:14:29.223 } 00:14:29.223 ] 00:14:29.223 }, 00:14:29.223 { 00:14:29.223 "subsystem": "nvmf", 00:14:29.223 "config": [ 00:14:29.223 { 00:14:29.223 "method": "nvmf_set_config", 00:14:29.223 "params": { 00:14:29.223 "discovery_filter": "match_any", 00:14:29.223 "admin_cmd_passthru": { 00:14:29.223 "identify_ctrlr": false 00:14:29.223 }, 00:14:29.223 "dhchap_digests": [ 00:14:29.223 "sha256", 00:14:29.223 "sha384", 00:14:29.223 "sha512" 00:14:29.223 ], 00:14:29.223 "dhchap_dhgroups": [ 00:14:29.223 "null", 00:14:29.223 "ffdhe2048", 00:14:29.223 "ffdhe3072", 00:14:29.223 "ffdhe4096", 00:14:29.223 "ffdhe6144", 00:14:29.223 "ffdhe8192" 00:14:29.223 ] 00:14:29.223 } 00:14:29.223 }, 00:14:29.223 { 00:14:29.223 "method": "nvmf_set_max_subsystems", 00:14:29.223 "params": { 00:14:29.223 "max_subsystems": 1024 00:14:29.223 } 00:14:29.223 }, 00:14:29.223 { 00:14:29.223 "method": "nvmf_set_crdt", 00:14:29.223 "params": { 00:14:29.223 "crdt1": 0, 00:14:29.223 "crdt2": 0, 00:14:29.223 "crdt3": 0 00:14:29.223 } 00:14:29.223 }, 00:14:29.223 { 00:14:29.223 "method": "nvmf_create_transport", 00:14:29.223 "params": { 00:14:29.223 "trtype": "TCP", 00:14:29.223 "max_queue_depth": 128, 00:14:29.223 "max_io_qpairs_per_ctrlr": 127, 00:14:29.223 "in_capsule_data_size": 4096, 00:14:29.223 "max_io_size": 131072, 00:14:29.223 "io_unit_size": 131072, 00:14:29.223 "max_aq_depth": 128, 00:14:29.223 "num_shared_buffers": 511, 00:14:29.223 "buf_cache_size": 4294967295, 00:14:29.223 "dif_insert_or_strip": false, 00:14:29.223 "zcopy": false, 00:14:29.223 "c2h_success": false, 00:14:29.223 "sock_priority": 0, 00:14:29.223 "abort_timeout_sec": 1, 00:14:29.223 "ack_timeout": 0, 00:14:29.223 "data_wr_pool_size": 0 00:14:29.223 } 00:14:29.223 }, 00:14:29.223 { 00:14:29.223 "method": "nvmf_create_subsystem", 00:14:29.223 "params": { 00:14:29.223 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:29.223 "allow_any_host": false, 00:14:29.223 "serial_number": "00000000000000000000", 00:14:29.223 "model_number": "SPDK bdev Controller", 00:14:29.223 "max_namespaces": 32, 00:14:29.223 "min_cntlid": 1, 00:14:29.223 "max_cntlid": 65519, 00:14:29.223 "ana_reporting": false 00:14:29.223 } 00:14:29.223 }, 00:14:29.223 { 00:14:29.223 "method": "nvmf_subsystem_add_host", 00:14:29.223 "params": { 00:14:29.223 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:29.223 "host": "nqn.2016-06.io.spdk:host1", 00:14:29.223 "psk": "key0" 00:14:29.223 } 00:14:29.223 }, 00:14:29.223 { 00:14:29.223 "method": "nvmf_subsystem_add_ns", 00:14:29.223 "params": { 00:14:29.223 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:29.223 "namespace": { 00:14:29.223 "nsid": 1, 00:14:29.223 "bdev_name": "malloc0", 00:14:29.223 "nguid": "023DE5CC56444646A8A2ED948EE727DF", 00:14:29.223 "uuid": "023de5cc-5644-4646-a8a2-ed948ee727df", 00:14:29.223 "no_auto_visible": false 00:14:29.223 } 00:14:29.223 } 00:14:29.223 }, 00:14:29.223 { 00:14:29.223 "method": "nvmf_subsystem_add_listener", 00:14:29.223 "params": { 00:14:29.223 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:29.223 "listen_address": { 00:14:29.223 "trtype": "TCP", 00:14:29.223 "adrfam": "IPv4", 00:14:29.223 "traddr": "10.0.0.3", 00:14:29.223 "trsvcid": "4420" 00:14:29.223 }, 00:14:29.223 "secure_channel": false, 00:14:29.223 "sock_impl": "ssl" 00:14:29.223 } 00:14:29.223 } 00:14:29.223 ] 00:14:29.223 } 00:14:29.223 ] 00:14:29.223 }' 00:14:29.223 03:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:29.223 03:17:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:29.223 03:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:29.223 03:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72676 00:14:29.223 03:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72676 00:14:29.223 03:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72676 ']' 00:14:29.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.223 03:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.223 03:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:29.223 03:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.223 03:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:29.223 03:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:29.223 [2024-10-09 03:17:12.325383] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:14:29.223 [2024-10-09 03:17:12.325470] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.223 [2024-10-09 03:17:12.457114] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.482 [2024-10-09 03:17:12.542921] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:29.482 [2024-10-09 03:17:12.543357] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:29.482 [2024-10-09 03:17:12.543378] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:29.482 [2024-10-09 03:17:12.543387] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:29.482 [2024-10-09 03:17:12.543394] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
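This target (pid 72676) is not configured through individual RPCs; the JSON captured earlier with save_config is fed back in on /dev/fd/62 via -c, so the whole TLS state (key0, the cnode1 subsystem, the host PSK mapping, the ssl listener) is recreated at startup. A sketch of the pattern the nvmfappstart helper applies here, reconstructed from the trace, which shows the echoed config and the -c /dev/fd/62 argument separately:

  tgtcfg=$(scripts/rpc.py save_config)      # captured from the previous target at tls.sh@267
  ip netns exec nvmf_tgt_ns_spdk \
      build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")   # the <(...) fd shows up as /dev/fd/62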
00:14:29.482 [2024-10-09 03:17:12.543890] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.482 [2024-10-09 03:17:12.711124] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:29.741 [2024-10-09 03:17:12.788613] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:29.741 [2024-10-09 03:17:12.835038] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:29.741 [2024-10-09 03:17:12.835448] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:30.001 03:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:30.001 03:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:30.001 03:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:30.001 03:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:30.001 03:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:30.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:30.260 03:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.260 03:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72709 00:14:30.260 03:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72709 /var/tmp/bdevperf.sock 00:14:30.260 03:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72709 ']' 00:14:30.260 03:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:30.260 03:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:30.260 03:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:30.260 03:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:30.260 03:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:30.260 03:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:30.260 03:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:14:30.260 "subsystems": [ 00:14:30.260 { 00:14:30.260 "subsystem": "keyring", 00:14:30.260 "config": [ 00:14:30.260 { 00:14:30.260 "method": "keyring_file_add_key", 00:14:30.260 "params": { 00:14:30.260 "name": "key0", 00:14:30.260 "path": "/tmp/tmp.KoRS3lc0K0" 00:14:30.260 } 00:14:30.260 } 00:14:30.260 ] 00:14:30.260 }, 00:14:30.260 { 00:14:30.260 "subsystem": "iobuf", 00:14:30.260 "config": [ 00:14:30.260 { 00:14:30.260 "method": "iobuf_set_options", 00:14:30.260 "params": { 00:14:30.260 "small_pool_count": 8192, 00:14:30.260 "large_pool_count": 1024, 00:14:30.260 "small_bufsize": 8192, 00:14:30.260 "large_bufsize": 135168 00:14:30.260 } 00:14:30.260 } 00:14:30.260 ] 00:14:30.260 }, 00:14:30.260 { 00:14:30.260 "subsystem": "sock", 00:14:30.260 "config": [ 00:14:30.260 { 00:14:30.260 "method": "sock_set_default_impl", 00:14:30.260 "params": { 00:14:30.260 "impl_name": "uring" 00:14:30.260 } 00:14:30.260 }, 00:14:30.260 { 00:14:30.260 "method": "sock_impl_set_options", 00:14:30.260 "params": { 00:14:30.260 "impl_name": "ssl", 00:14:30.260 "recv_buf_size": 4096, 00:14:30.260 "send_buf_size": 4096, 00:14:30.260 "enable_recv_pipe": true, 00:14:30.260 "enable_quickack": false, 00:14:30.260 "enable_placement_id": 0, 00:14:30.260 "enable_zerocopy_send_server": true, 00:14:30.260 "enable_zerocopy_send_client": false, 00:14:30.260 "zerocopy_threshold": 0, 00:14:30.260 "tls_version": 0, 00:14:30.260 "enable_ktls": false 00:14:30.260 } 00:14:30.260 }, 00:14:30.260 { 00:14:30.260 "method": "sock_impl_set_options", 00:14:30.260 "params": { 00:14:30.260 "impl_name": "posix", 00:14:30.260 "recv_buf_size": 2097152, 00:14:30.260 "send_buf_size": 2097152, 00:14:30.260 "enable_recv_pipe": true, 00:14:30.260 "enable_quickack": false, 00:14:30.260 "enable_placement_id": 0, 00:14:30.260 "enable_zerocopy_send_server": true, 00:14:30.260 "enable_zerocopy_send_client": false, 00:14:30.260 "zerocopy_threshold": 0, 00:14:30.261 "tls_version": 0, 00:14:30.261 "enable_ktls": false 00:14:30.261 } 00:14:30.261 }, 00:14:30.261 { 00:14:30.261 "method": "sock_impl_set_options", 00:14:30.261 "params": { 00:14:30.261 "impl_name": "uring", 00:14:30.261 "recv_buf_size": 2097152, 00:14:30.261 "send_buf_size": 2097152, 00:14:30.261 "enable_recv_pipe": true, 00:14:30.261 "enable_quickack": false, 00:14:30.261 "enable_placement_id": 0, 00:14:30.261 "enable_zerocopy_send_server": false, 00:14:30.261 "enable_zerocopy_send_client": false, 00:14:30.261 "zerocopy_threshold": 0, 00:14:30.261 "tls_version": 0, 00:14:30.261 "enable_ktls": false 00:14:30.261 } 00:14:30.261 } 00:14:30.261 ] 00:14:30.261 }, 00:14:30.261 { 00:14:30.261 "subsystem": "vmd", 00:14:30.261 "config": [] 00:14:30.261 }, 00:14:30.261 { 00:14:30.261 "subsystem": "accel", 00:14:30.261 "config": [ 00:14:30.261 { 00:14:30.261 "method": "accel_set_options", 00:14:30.261 "params": { 00:14:30.261 "small_cache_size": 128, 00:14:30.261 "large_cache_size": 16, 00:14:30.261 "task_count": 2048, 00:14:30.261 "sequence_count": 2048, 00:14:30.261 "buf_count": 2048 
00:14:30.261 } 00:14:30.261 } 00:14:30.261 ] 00:14:30.261 }, 00:14:30.261 { 00:14:30.261 "subsystem": "bdev", 00:14:30.261 "config": [ 00:14:30.261 { 00:14:30.261 "method": "bdev_set_options", 00:14:30.261 "params": { 00:14:30.261 "bdev_io_pool_size": 65535, 00:14:30.261 "bdev_io_cache_size": 256, 00:14:30.261 "bdev_auto_examine": true, 00:14:30.261 "iobuf_small_cache_size": 128, 00:14:30.261 "iobuf_large_cache_size": 16 00:14:30.261 } 00:14:30.261 }, 00:14:30.261 { 00:14:30.261 "method": "bdev_raid_set_options", 00:14:30.261 "params": { 00:14:30.261 "process_window_size_kb": 1024, 00:14:30.261 "process_max_bandwidth_mb_sec": 0 00:14:30.261 } 00:14:30.261 }, 00:14:30.261 { 00:14:30.261 "method": "bdev_iscsi_set_options", 00:14:30.261 "params": { 00:14:30.261 "timeout_sec": 30 00:14:30.261 } 00:14:30.261 }, 00:14:30.261 { 00:14:30.261 "method": "bdev_nvme_set_options", 00:14:30.261 "params": { 00:14:30.261 "action_on_timeout": "none", 00:14:30.261 "timeout_us": 0, 00:14:30.261 "timeout_admin_us": 0, 00:14:30.261 "keep_alive_timeout_ms": 10000, 00:14:30.261 "arbitration_burst": 0, 00:14:30.261 "low_priority_weight": 0, 00:14:30.261 "medium_priority_weight": 0, 00:14:30.261 "high_priority_weight": 0, 00:14:30.261 "nvme_adminq_poll_period_us": 10000, 00:14:30.261 "nvme_ioq_poll_period_us": 0, 00:14:30.261 "io_queue_requests": 512, 00:14:30.261 "delay_cmd_submit": true, 00:14:30.261 "transport_retry_count": 4, 00:14:30.261 "bdev_retry_count": 3, 00:14:30.261 "transport_ack_timeout": 0, 00:14:30.261 "ctrlr_loss_timeout_sec": 0, 00:14:30.261 "reconnect_delay_sec": 0, 00:14:30.261 "fast_io_fail_timeout_sec": 0, 00:14:30.261 "disable_auto_failback": false, 00:14:30.261 "generate_uuids": false, 00:14:30.261 "transport_tos": 0, 00:14:30.261 "nvme_error_stat": false, 00:14:30.261 "rdma_srq_size": 0, 00:14:30.261 "io_path_stat": false, 00:14:30.261 "allow_accel_sequence": false, 00:14:30.261 "rdma_max_cq_size": 0, 00:14:30.261 "rdma_cm_event_timeout_ms": 0, 00:14:30.261 "dhchap_digests": [ 00:14:30.261 "sha256", 00:14:30.261 "sha384", 00:14:30.261 "sha512" 00:14:30.261 ], 00:14:30.261 "dhchap_dhgroups": [ 00:14:30.261 "null", 00:14:30.261 "ffdhe2048", 00:14:30.261 "ffdhe3072", 00:14:30.261 "ffdhe4096", 00:14:30.261 "ffdhe6144", 00:14:30.261 "ffdhe8192" 00:14:30.261 ] 00:14:30.261 } 00:14:30.261 }, 00:14:30.261 { 00:14:30.261 "method": "bdev_nvme_attach_controller", 00:14:30.261 "params": { 00:14:30.261 "name": "nvme0", 00:14:30.261 "trtype": "TCP", 00:14:30.261 "adrfam": "IPv4", 00:14:30.261 "traddr": "10.0.0.3", 00:14:30.261 "trsvcid": "4420", 00:14:30.261 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:30.261 "prchk_reftag": false, 00:14:30.261 "prchk_guard": false, 00:14:30.261 "ctrlr_loss_timeout_sec": 0, 00:14:30.261 "reconnect_delay_sec": 0, 00:14:30.261 "fast_io_fail_timeout_sec": 0, 00:14:30.261 "psk": "key0", 00:14:30.261 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:30.261 "hdgst": false, 00:14:30.261 "ddgst": false, 00:14:30.261 "multipath": "multipath" 00:14:30.261 } 00:14:30.261 }, 00:14:30.261 { 00:14:30.261 "method": "bdev_nvme_set_hotplug", 00:14:30.261 "params": { 00:14:30.261 "period_us": 100000, 00:14:30.261 "enable": false 00:14:30.261 } 00:14:30.261 }, 00:14:30.261 { 00:14:30.261 "method": "bdev_enable_histogram", 00:14:30.261 "params": { 00:14:30.261 "name": "nvme0n1", 00:14:30.261 "enable": true 00:14:30.261 } 00:14:30.261 }, 00:14:30.261 { 00:14:30.261 "method": "bdev_wait_for_examine" 00:14:30.261 } 00:14:30.261 ] 00:14:30.261 }, 00:14:30.261 { 00:14:30.261 "subsystem": "nbd", 
00:14:30.261 "config": [] 00:14:30.261 } 00:14:30.261 ] 00:14:30.261 }' 00:14:30.261 [2024-10-09 03:17:13.391912] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:14:30.261 [2024-10-09 03:17:13.392247] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72709 ] 00:14:30.261 [2024-10-09 03:17:13.531439] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.520 [2024-10-09 03:17:13.635846] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.520 [2024-10-09 03:17:13.771094] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:30.520 [2024-10-09 03:17:13.820954] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:31.088 03:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:31.088 03:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:31.088 03:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:14:31.088 03:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:31.347 03:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.347 03:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:31.606 Running I/O for 1 seconds... 
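bdevperf (pid 72709) was likewise started from the saved bperfcfg on /dev/fd/63, so the keyring entry and the TLS controller attach happen during startup instead of over RPC. Before this third run was kicked off, the test confirmed that the restored controller came up under the expected name, exactly as traced above:

  name=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]                                          # tls.sh@279
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests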
00:14:32.543 4444.00 IOPS, 17.36 MiB/s 00:14:32.543 Latency(us) 00:14:32.543 [2024-10-09T03:17:15.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.543 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:32.543 Verification LBA range: start 0x0 length 0x2000 00:14:32.543 nvme0n1 : 1.02 4464.87 17.44 0.00 0.00 28330.79 6911.07 18111.77 00:14:32.543 [2024-10-09T03:17:15.846Z] =================================================================================================================== 00:14:32.543 [2024-10-09T03:17:15.846Z] Total : 4464.87 17.44 0.00 0.00 28330.79 6911.07 18111.77 00:14:32.543 { 00:14:32.543 "results": [ 00:14:32.543 { 00:14:32.543 "job": "nvme0n1", 00:14:32.543 "core_mask": "0x2", 00:14:32.543 "workload": "verify", 00:14:32.543 "status": "finished", 00:14:32.543 "verify_range": { 00:14:32.543 "start": 0, 00:14:32.543 "length": 8192 00:14:32.543 }, 00:14:32.543 "queue_depth": 128, 00:14:32.543 "io_size": 4096, 00:14:32.543 "runtime": 1.024218, 00:14:32.543 "iops": 4464.86978358123, 00:14:32.543 "mibps": 17.44089759211418, 00:14:32.543 "io_failed": 0, 00:14:32.543 "io_timeout": 0, 00:14:32.543 "avg_latency_us": 28330.79342067074, 00:14:32.543 "min_latency_us": 6911.069090909091, 00:14:32.543 "max_latency_us": 18111.767272727273 00:14:32.543 } 00:14:32.543 ], 00:14:32.543 "core_count": 1 00:14:32.544 } 00:14:32.544 03:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:14:32.544 03:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:14:32.544 03:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:32.544 03:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:14:32.544 03:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:14:32.544 03:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:14:32.544 03:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:32.544 03:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:14:32.544 03:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:14:32.544 03:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:14:32.544 03:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:32.544 nvmf_trace.0 00:14:32.815 03:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:14:32.815 03:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72709 00:14:32.815 03:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72709 ']' 00:14:32.815 03:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72709 00:14:32.815 03:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:32.815 03:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:32.815 03:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72709 00:14:32.815 03:17:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:32.815 03:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:32.815 killing process with pid 72709 00:14:32.815 03:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72709' 00:14:32.815 Received shutdown signal, test time was about 1.000000 seconds 00:14:32.815 00:14:32.815 Latency(us) 00:14:32.815 [2024-10-09T03:17:16.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.815 [2024-10-09T03:17:16.118Z] =================================================================================================================== 00:14:32.815 [2024-10-09T03:17:16.118Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:32.815 03:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72709 00:14:32.815 03:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72709 00:14:33.074 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:33.074 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:33.074 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:14:33.074 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:33.074 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:14:33.074 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:33.074 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:33.074 rmmod nvme_tcp 00:14:33.074 rmmod nvme_fabrics 00:14:33.074 rmmod nvme_keyring 00:14:33.074 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:33.074 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:14:33.074 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:14:33.074 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 72676 ']' 00:14:33.074 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 72676 00:14:33.074 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72676 ']' 00:14:33.074 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72676 00:14:33.074 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:33.074 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:33.074 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72676 00:14:33.074 killing process with pid 72676 00:14:33.074 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:33.074 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:33.074 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72676' 00:14:33.074 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72676 00:14:33.074 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # 
wait 72676 00:14:33.333 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:33.333 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:33.333 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:33.333 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:14:33.333 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:14:33.333 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:33.333 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:14:33.333 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:33.333 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:33.333 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:33.333 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:33.333 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:33.333 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:33.333 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:33.333 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:33.333 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:33.333 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:33.333 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:33.592 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:33.592 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:33.592 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:33.592 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:33.592 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:33.592 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.592 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:33.592 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.592 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:14:33.592 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Kqg8vpcSEg /tmp/tmp.mjd2lJBVDt /tmp/tmp.KoRS3lc0K0 00:14:33.592 00:14:33.592 real 1m32.009s 00:14:33.592 user 2m28.983s 00:14:33.592 sys 0m29.877s 00:14:33.592 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:33.592 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 
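The nvmf_tcp_fini trace above removes only the firewall rules that were tagged at setup time and then unwinds the veth/bridge/namespace topology. Judging from the three commands visible under iptr (iptables-save, grep -v SPDK_NVMF, iptables-restore), the rule cleanup is presumably the usual save/filter/restore pipeline; a hedged sketch of the whole teardown:

# keep every rule except the ones that were added with the SPDK_NVMF comment tag
iptables-save | grep -v SPDK_NVMF | iptables-restore
# unwind the test topology roughly in reverse order of creation
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk   # assumed body of _remove_spdk_ns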
00:14:33.592 ************************************ 00:14:33.592 END TEST nvmf_tls 00:14:33.592 ************************************ 00:14:33.592 03:17:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:33.592 03:17:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:33.592 03:17:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:33.592 03:17:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:33.592 ************************************ 00:14:33.592 START TEST nvmf_fips 00:14:33.592 ************************************ 00:14:33.592 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:33.852 * Looking for test storage... 00:14:33.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:33.852 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:33.852 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:14:33.852 03:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:33.852 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:33.852 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:33.852 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:33.852 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:33.852 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:14:33.852 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:14:33.852 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:14:33.852 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:14:33.852 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:14:33.852 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:14:33.852 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:14:33.852 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:33.852 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:14:33.852 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:14:33.852 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:33.852 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:33.852 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:14:33.852 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:14:33.852 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:33.852 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:14:33.852 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:14:33.852 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:14:33.852 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:14:33.852 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:33.852 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:14:33.852 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:14:33.852 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:33.852 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:33.852 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:14:33.852 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:33.852 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:33.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.852 --rc genhtml_branch_coverage=1 00:14:33.852 --rc genhtml_function_coverage=1 00:14:33.852 --rc genhtml_legend=1 00:14:33.852 --rc geninfo_all_blocks=1 00:14:33.852 --rc geninfo_unexecuted_blocks=1 00:14:33.852 00:14:33.852 ' 00:14:33.852 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:33.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.852 --rc genhtml_branch_coverage=1 00:14:33.853 --rc genhtml_function_coverage=1 00:14:33.853 --rc genhtml_legend=1 00:14:33.853 --rc geninfo_all_blocks=1 00:14:33.853 --rc geninfo_unexecuted_blocks=1 00:14:33.853 00:14:33.853 ' 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:33.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.853 --rc genhtml_branch_coverage=1 00:14:33.853 --rc genhtml_function_coverage=1 00:14:33.853 --rc genhtml_legend=1 00:14:33.853 --rc geninfo_all_blocks=1 00:14:33.853 --rc geninfo_unexecuted_blocks=1 00:14:33.853 00:14:33.853 ' 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:33.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.853 --rc genhtml_branch_coverage=1 00:14:33.853 --rc genhtml_function_coverage=1 00:14:33.853 --rc genhtml_legend=1 00:14:33.853 --rc geninfo_all_blocks=1 00:14:33.853 --rc geninfo_unexecuted_blocks=1 00:14:33.853 00:14:33.853 ' 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
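The long scripts/common.sh trace in this stretch (and again for the OpenSSL 3.1.1 >= 3.0.0 check further down) is cmp_versions splitting two dotted version strings on '.'/'-' and comparing them field by field, padding the shorter one with zeros. A simplified reconstruction of that idea, assuming purely numeric fields (the real helper also handles the other comparison operators):

# return 0 (true) when $1 is strictly lower than $2, e.g. version_lt 1.15 2
version_lt() {
    local IFS=.- i
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        ((${a[i]:-0} < ${b[i]:-0})) && return 0
        ((${a[i]:-0} > ${b[i]:-0})) && return 1
    done
    return 1   # equal versions are not "less than"
}
version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "old lcov option set needed"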
00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:33.853 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:14:33.853 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:33.854 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:33.854 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:14:33.854 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:33.854 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:14:33.854 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:14:33.854 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:33.854 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:14:33.854 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:14:33.854 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:14:33.854 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:14:33.854 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:33.854 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:14:33.854 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:14:33.854 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:33.854 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:14:33.854 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:14:33.854 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:14:33.854 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:14:33.854 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:33.854 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:33.854 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:14:33.854 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:14:33.854 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:14:33.854 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:14:33.854 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:14:33.854 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:14:33.854 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:33.854 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:14:33.854 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:14:33.854 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:14:33.854 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:14:34.113 Error setting digest 00:14:34.113 40521CB2C27F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:14:34.113 40521CB2C27F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:34.113 
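The "Error setting digest" output above is the point of the whole check: fips.sh builds an OpenSSL config (exported as OPENSSL_CONF=spdk_fips.conf) that loads the base and fips providers, and then treats a failing MD5 as evidence that the FIPS provider is actually enforcing the approved-algorithm list. A quick manual equivalent (the config name comes from the trace; the echo strings are illustrative):

OPENSSL_CONF=spdk_fips.conf openssl list -providers | grep name   # expect a base and a fips provider
if OPENSSL_CONF=spdk_fips.conf openssl md5 /dev/null >/dev/null 2>&1; then
    echo "MD5 succeeded: FIPS provider is not enforcing approved algorithms"
else
    echo "MD5 rejected: FIPS mode is active"   # the outcome seen in this run
fi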
03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@458 -- # nvmf_veth_init 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:34.113 Cannot find device "nvmf_init_br" 00:14:34.113 03:17:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:34.113 Cannot find device "nvmf_init_br2" 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:34.113 Cannot find device "nvmf_tgt_br" 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:34.113 Cannot find device "nvmf_tgt_br2" 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:34.113 Cannot find device "nvmf_init_br" 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:34.113 Cannot find device "nvmf_init_br2" 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:34.113 Cannot find device "nvmf_tgt_br" 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:34.113 Cannot find device "nvmf_tgt_br2" 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:34.113 Cannot find device "nvmf_br" 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:34.113 Cannot find device "nvmf_init_if" 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:34.113 Cannot find device "nvmf_init_if2" 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:34.113 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:34.113 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:34.113 03:17:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:34.113 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:34.373 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:34.373 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:14:34.373 00:14:34.373 --- 10.0.0.3 ping statistics --- 00:14:34.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:34.373 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:34.373 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:34.373 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:14:34.373 00:14:34.373 --- 10.0.0.4 ping statistics --- 00:14:34.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:34.373 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:34.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:34.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:14:34.373 00:14:34.373 --- 10.0.0.1 ping statistics --- 00:14:34.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:34.373 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:34.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:34.373 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:14:34.373 00:14:34.373 --- 10.0.0.2 ping statistics --- 00:14:34.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:34.373 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # return 0 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=73035 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 73035 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 73035 ']' 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:34.373 03:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:34.632 [2024-10-09 03:17:17.761759] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:14:34.632 [2024-10-09 03:17:17.761865] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:34.632 [2024-10-09 03:17:17.905000] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.891 [2024-10-09 03:17:18.022348] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:34.891 [2024-10-09 03:17:18.022421] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:34.891 [2024-10-09 03:17:18.022449] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:34.891 [2024-10-09 03:17:18.022460] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:34.891 [2024-10-09 03:17:18.022469] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:34.891 [2024-10-09 03:17:18.022915] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:34.891 [2024-10-09 03:17:18.080338] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:35.532 03:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:35.532 03:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:14:35.532 03:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:35.532 03:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:35.532 03:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:35.790 03:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:35.790 03:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:14:35.790 03:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:35.790 03:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:14:35.790 03:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.jEM 00:14:35.790 03:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:35.790 03:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.jEM 00:14:35.790 03:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.jEM 00:14:35.790 03:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.jEM 00:14:35.790 03:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:36.049 [2024-10-09 03:17:19.100910] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:36.049 [2024-10-09 03:17:19.116857] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:36.049 [2024-10-09 03:17:19.117041] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:36.049 malloc0 00:14:36.049 03:17:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:36.049 03:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=73071 00:14:36.049 03:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:36.049 03:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 73071 /var/tmp/bdevperf.sock 00:14:36.049 03:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 73071 ']' 00:14:36.049 03:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:36.049 03:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:36.049 03:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:36.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:36.049 03:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:36.049 03:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:36.049 [2024-10-09 03:17:19.276616] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:14:36.049 [2024-10-09 03:17:19.276710] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73071 ] 00:14:36.307 [2024-10-09 03:17:19.418119] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.307 [2024-10-09 03:17:19.526245] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:36.307 [2024-10-09 03:17:19.585341] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:37.244 03:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:37.244 03:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:14:37.244 03:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.jEM 00:14:37.503 03:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:37.503 [2024-10-09 03:17:20.772420] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:37.762 TLSTESTn1 00:14:37.762 03:17:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:37.762 Running I/O for 10 seconds... 
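At this point the test has registered the NVMeTLSkey-1:01 pre-shared key with bdevperf and attached a TLS-protected controller; the two rpc.py calls in the trace are the entire client-side recipe. The same two calls with short explanations (socket path, key path and addresses are the ones from this run):

# register the 0600-permission PSK file under the name key0 in bdevperf's keyring
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    keyring_file_add_key key0 /tmp/spdk-psk.jEM
# attach to the TLS-enabled listener inside the target namespace; --psk picks the
# keyring entry, so the TCP connection to 10.0.0.3:4420 is secured with that key
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

The /tmp/spdk-psk.jEM path was created with mktemp a few lines earlier and is removed again by cleanup at the end of the test.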
00:14:40.076 4352.00 IOPS, 17.00 MiB/s [2024-10-09T03:17:24.316Z] 4416.00 IOPS, 17.25 MiB/s [2024-10-09T03:17:25.253Z] 4421.33 IOPS, 17.27 MiB/s [2024-10-09T03:17:26.190Z] 4445.25 IOPS, 17.36 MiB/s [2024-10-09T03:17:27.126Z] 4460.20 IOPS, 17.42 MiB/s [2024-10-09T03:17:28.062Z] 4467.50 IOPS, 17.45 MiB/s [2024-10-09T03:17:29.004Z] 4471.29 IOPS, 17.47 MiB/s [2024-10-09T03:17:29.978Z] 4481.12 IOPS, 17.50 MiB/s [2024-10-09T03:17:31.356Z] 4494.22 IOPS, 17.56 MiB/s [2024-10-09T03:17:31.356Z] 4495.00 IOPS, 17.56 MiB/s 00:14:48.053 Latency(us) 00:14:48.053 [2024-10-09T03:17:31.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.053 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:48.053 Verification LBA range: start 0x0 length 0x2000 00:14:48.053 TLSTESTn1 : 10.02 4500.59 17.58 0.00 0.00 28391.13 5451.40 21209.83 00:14:48.053 [2024-10-09T03:17:31.356Z] =================================================================================================================== 00:14:48.053 [2024-10-09T03:17:31.356Z] Total : 4500.59 17.58 0.00 0.00 28391.13 5451.40 21209.83 00:14:48.053 { 00:14:48.053 "results": [ 00:14:48.053 { 00:14:48.053 "job": "TLSTESTn1", 00:14:48.053 "core_mask": "0x4", 00:14:48.053 "workload": "verify", 00:14:48.053 "status": "finished", 00:14:48.053 "verify_range": { 00:14:48.053 "start": 0, 00:14:48.053 "length": 8192 00:14:48.053 }, 00:14:48.053 "queue_depth": 128, 00:14:48.053 "io_size": 4096, 00:14:48.053 "runtime": 10.015577, 00:14:48.053 "iops": 4500.58943184202, 00:14:48.053 "mibps": 17.58042746813289, 00:14:48.053 "io_failed": 0, 00:14:48.053 "io_timeout": 0, 00:14:48.053 "avg_latency_us": 28391.129634798603, 00:14:48.053 "min_latency_us": 5451.403636363636, 00:14:48.053 "max_latency_us": 21209.832727272726 00:14:48.053 } 00:14:48.053 ], 00:14:48.053 "core_count": 1 00:14:48.053 } 00:14:48.053 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:14:48.053 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:14:48.053 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:14:48.053 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:14:48.053 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:14:48.053 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:48.053 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:14:48.053 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:14:48.053 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:14:48.053 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:48.053 nvmf_trace.0 00:14:48.053 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:14:48.053 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 73071 00:14:48.053 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 73071 ']' 00:14:48.053 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 
73071 00:14:48.053 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:14:48.053 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:48.053 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73071 00:14:48.053 killing process with pid 73071 00:14:48.053 Received shutdown signal, test time was about 10.000000 seconds 00:14:48.053 00:14:48.053 Latency(us) 00:14:48.053 [2024-10-09T03:17:31.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.053 [2024-10-09T03:17:31.356Z] =================================================================================================================== 00:14:48.053 [2024-10-09T03:17:31.356Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:48.053 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:48.053 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:48.053 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73071' 00:14:48.053 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 73071 00:14:48.053 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 73071 00:14:48.312 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:14:48.312 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:48.312 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:14:48.312 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:48.312 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:14:48.312 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:48.312 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:48.312 rmmod nvme_tcp 00:14:48.312 rmmod nvme_fabrics 00:14:48.312 rmmod nvme_keyring 00:14:48.313 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:48.313 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:14:48.313 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:14:48.313 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 73035 ']' 00:14:48.313 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 73035 00:14:48.313 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 73035 ']' 00:14:48.313 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 73035 00:14:48.313 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:14:48.313 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:48.313 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73035 00:14:48.313 killing process with pid 73035 00:14:48.313 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:48.313 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:48.313 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73035' 00:14:48.313 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 73035 00:14:48.313 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 73035 00:14:48.572 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:48.572 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:48.572 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:48.572 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:14:48.572 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:14:48.572 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:14:48.572 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:48.572 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:48.572 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:48.572 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:48.572 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:48.572 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:48.572 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:48.572 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:48.572 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:48.572 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:48.572 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:48.572 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:48.831 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:48.831 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:48.831 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:48.831 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:48.831 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:48.831 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.831 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:48.831 03:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.831 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:14:48.831 03:17:32 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.jEM 00:14:48.831 ************************************ 00:14:48.831 END TEST nvmf_fips 00:14:48.831 ************************************ 00:14:48.831 00:14:48.831 real 0m15.188s 00:14:48.831 user 0m21.298s 00:14:48.831 sys 0m5.718s 00:14:48.831 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:48.831 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:48.831 03:17:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:14:48.831 03:17:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:48.831 03:17:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:48.831 03:17:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:48.831 ************************************ 00:14:48.831 START TEST nvmf_control_msg_list 00:14:48.831 ************************************ 00:14:48.831 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:14:49.091 * Looking for test storage... 00:14:49.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:49.091 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:49.091 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:14:49.091 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:49.091 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:49.091 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:49.091 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:49.091 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:49.091 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:14:49.091 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:14:49.091 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:14:49.091 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:14:49.091 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:14:49.091 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:14:49.091 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:14:49.091 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:49.091 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:14:49.091 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:14:49.091 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:14:49.091 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:49.091 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:14:49.091 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:14:49.091 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:49.091 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:14:49.091 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:14:49.091 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:14:49.091 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:14:49.091 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:49.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.092 --rc genhtml_branch_coverage=1 00:14:49.092 --rc genhtml_function_coverage=1 00:14:49.092 --rc genhtml_legend=1 00:14:49.092 --rc geninfo_all_blocks=1 00:14:49.092 --rc geninfo_unexecuted_blocks=1 00:14:49.092 00:14:49.092 ' 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:49.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.092 --rc genhtml_branch_coverage=1 00:14:49.092 --rc genhtml_function_coverage=1 00:14:49.092 --rc genhtml_legend=1 00:14:49.092 --rc geninfo_all_blocks=1 00:14:49.092 --rc geninfo_unexecuted_blocks=1 00:14:49.092 00:14:49.092 ' 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:49.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.092 --rc genhtml_branch_coverage=1 00:14:49.092 --rc genhtml_function_coverage=1 00:14:49.092 --rc genhtml_legend=1 00:14:49.092 --rc geninfo_all_blocks=1 00:14:49.092 --rc geninfo_unexecuted_blocks=1 00:14:49.092 00:14:49.092 ' 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:49.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.092 --rc genhtml_branch_coverage=1 00:14:49.092 --rc genhtml_function_coverage=1 00:14:49.092 --rc genhtml_legend=1 00:14:49.092 --rc geninfo_all_blocks=1 00:14:49.092 --rc 
geninfo_unexecuted_blocks=1 00:14:49.092 00:14:49.092 ' 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:49.092 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@458 -- # nvmf_veth_init 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:49.092 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:49.093 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:49.093 Cannot find device "nvmf_init_br" 00:14:49.093 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:14:49.093 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:49.093 Cannot find device "nvmf_init_br2" 00:14:49.093 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:14:49.093 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:49.093 Cannot find device "nvmf_tgt_br" 00:14:49.093 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:14:49.093 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:49.093 Cannot find device "nvmf_tgt_br2" 00:14:49.093 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:14:49.093 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:49.093 Cannot find device "nvmf_init_br" 00:14:49.093 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:14:49.093 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:49.093 Cannot find device "nvmf_init_br2" 00:14:49.093 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:14:49.093 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:49.093 Cannot find device "nvmf_tgt_br" 00:14:49.093 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:14:49.093 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:49.093 Cannot find device "nvmf_tgt_br2" 00:14:49.093 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:14:49.093 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:49.352 Cannot find device "nvmf_br" 00:14:49.352 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:14:49.352 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:49.352 Cannot find 
device "nvmf_init_if" 00:14:49.352 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:14:49.352 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:49.352 Cannot find device "nvmf_init_if2" 00:14:49.352 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:14:49.352 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:49.352 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:49.352 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:14:49.352 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:49.352 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:49.352 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:14:49.352 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:49.352 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:49.352 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:49.352 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:49.352 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:49.352 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:49.352 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:49.352 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:49.352 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:49.352 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:49.352 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:49.352 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:49.352 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:49.352 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:49.352 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:49.352 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:49.353 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:49.353 03:17:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:49.353 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:49.353 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:49.353 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:49.353 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:49.353 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:49.353 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:49.353 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:49.353 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:49.353 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:49.353 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:49.353 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:49.353 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:49.353 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:49.353 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:49.353 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:49.353 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:49.353 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:14:49.353 00:14:49.353 --- 10.0.0.3 ping statistics --- 00:14:49.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.353 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:14:49.353 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:49.353 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:49.353 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:14:49.353 00:14:49.353 --- 10.0.0.4 ping statistics --- 00:14:49.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.353 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:14:49.353 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:49.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:49.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:14:49.353 00:14:49.353 --- 10.0.0.1 ping statistics --- 00:14:49.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.353 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:14:49.353 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:49.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:49.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:14:49.353 00:14:49.353 --- 10.0.0.2 ping statistics --- 00:14:49.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.353 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:14:49.612 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:49.612 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # return 0 00:14:49.612 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:49.612 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:49.612 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:49.612 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:49.612 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:49.612 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:49.612 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:49.612 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:14:49.612 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:49.612 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:49.612 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:49.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.612 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=73462 00:14:49.612 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:49.612 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 73462 00:14:49.612 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 73462 ']' 00:14:49.612 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.612 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:49.612 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:49.612 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:49.612 03:17:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:49.612 [2024-10-09 03:17:32.734586] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:14:49.612 [2024-10-09 03:17:32.734725] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.612 [2024-10-09 03:17:32.877466] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.871 [2024-10-09 03:17:32.987754] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.871 [2024-10-09 03:17:32.987820] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:49.871 [2024-10-09 03:17:32.987847] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:49.871 [2024-10-09 03:17:32.987858] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:49.871 [2024-10-09 03:17:32.987867] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:49.871 [2024-10-09 03:17:32.988377] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.871 [2024-10-09 03:17:33.046942] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:50.439 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:50.439 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:14:50.439 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:50.439 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:50.439 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:50.699 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:50.699 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:14:50.699 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:14:50.699 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:14:50.699 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.699 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:50.699 [2024-10-09 03:17:33.750845] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:50.699 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.699 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:14:50.699 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.699 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:50.699 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.699 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:14:50.699 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.699 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:50.699 Malloc0 00:14:50.699 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.699 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:14:50.699 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.699 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:50.699 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.699 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:50.699 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.699 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:50.699 [2024-10-09 03:17:33.802395] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:50.699 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.699 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73494 00:14:50.699 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:50.699 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73495 00:14:50.699 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:50.699 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73496 00:14:50.699 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73494 00:14:50.699 03:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:50.699 [2024-10-09 03:17:33.976882] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:50.699 [2024-10-09 03:17:33.977384] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:50.699 [2024-10-09 03:17:33.987003] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:52.078 Initializing NVMe Controllers 00:14:52.078 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:52.078 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:14:52.078 Initialization complete. Launching workers. 00:14:52.078 ======================================================== 00:14:52.078 Latency(us) 00:14:52.078 Device Information : IOPS MiB/s Average min max 00:14:52.078 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3610.00 14.10 276.68 187.59 546.52 00:14:52.078 ======================================================== 00:14:52.078 Total : 3610.00 14.10 276.68 187.59 546.52 00:14:52.078 00:14:52.078 Initializing NVMe Controllers 00:14:52.078 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:52.078 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:14:52.078 Initialization complete. Launching workers. 00:14:52.078 ======================================================== 00:14:52.078 Latency(us) 00:14:52.078 Device Information : IOPS MiB/s Average min max 00:14:52.078 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3607.95 14.09 276.73 185.73 2016.39 00:14:52.078 ======================================================== 00:14:52.078 Total : 3607.95 14.09 276.73 185.73 2016.39 00:14:52.078 00:14:52.078 Initializing NVMe Controllers 00:14:52.078 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:52.078 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:14:52.078 Initialization complete. Launching workers. 
00:14:52.079 ======================================================== 00:14:52.079 Latency(us) 00:14:52.079 Device Information : IOPS MiB/s Average min max 00:14:52.079 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3646.00 14.24 273.92 108.64 793.88 00:14:52.079 ======================================================== 00:14:52.079 Total : 3646.00 14.24 273.92 108.64 793.88 00:14:52.079 00:14:52.079 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73495 00:14:52.079 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73496 00:14:52.079 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:52.079 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:14:52.079 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:52.079 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:14:52.079 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:52.079 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:14:52.079 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:52.079 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:52.079 rmmod nvme_tcp 00:14:52.079 rmmod nvme_fabrics 00:14:52.079 rmmod nvme_keyring 00:14:52.079 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:52.079 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:14:52.079 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:14:52.079 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 -- # '[' -n 73462 ']' 00:14:52.079 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 73462 00:14:52.079 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 73462 ']' 00:14:52.079 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 73462 00:14:52.079 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:14:52.079 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:52.079 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73462 00:14:52.079 killing process with pid 73462 00:14:52.079 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:52.079 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:52.079 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73462' 00:14:52.079 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 73462 00:14:52.079 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@974 -- # wait 73462 00:14:52.338 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:52.338 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:52.338 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:52.338 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:14:52.338 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:14:52.338 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:52.339 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:14:52.339 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:52.339 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:52.339 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:52.339 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:52.339 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:52.339 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:52.339 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:52.339 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:52.339 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:52.339 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:52.339 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:52.339 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:52.339 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:52.339 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:52.339 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:52.339 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:52.339 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.339 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:52.339 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.339 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:14:52.598 00:14:52.598 real 0m3.568s 00:14:52.598 user 0m5.579s 00:14:52.598 
sys 0m1.366s 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:52.598 ************************************ 00:14:52.598 END TEST nvmf_control_msg_list 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:52.598 ************************************ 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:52.598 ************************************ 00:14:52.598 START TEST nvmf_wait_for_buf 00:14:52.598 ************************************ 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:14:52.598 * Looking for test storage... 00:14:52.598 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:52.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.598 --rc genhtml_branch_coverage=1 00:14:52.598 --rc genhtml_function_coverage=1 00:14:52.598 --rc genhtml_legend=1 00:14:52.598 --rc geninfo_all_blocks=1 00:14:52.598 --rc geninfo_unexecuted_blocks=1 00:14:52.598 00:14:52.598 ' 00:14:52.598 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:52.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.598 --rc genhtml_branch_coverage=1 00:14:52.598 --rc genhtml_function_coverage=1 00:14:52.598 --rc genhtml_legend=1 00:14:52.599 --rc geninfo_all_blocks=1 00:14:52.599 --rc geninfo_unexecuted_blocks=1 00:14:52.599 00:14:52.599 ' 00:14:52.599 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:52.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.599 --rc genhtml_branch_coverage=1 00:14:52.599 --rc genhtml_function_coverage=1 00:14:52.599 --rc genhtml_legend=1 00:14:52.599 --rc geninfo_all_blocks=1 00:14:52.599 --rc geninfo_unexecuted_blocks=1 00:14:52.599 00:14:52.599 ' 00:14:52.599 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:52.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.599 --rc genhtml_branch_coverage=1 00:14:52.599 --rc genhtml_function_coverage=1 00:14:52.599 --rc genhtml_legend=1 00:14:52.599 --rc geninfo_all_blocks=1 00:14:52.599 --rc geninfo_unexecuted_blocks=1 00:14:52.599 00:14:52.599 ' 00:14:52.599 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:52.599 03:17:35 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:14:52.599 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.599 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.599 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.599 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.599 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.599 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.599 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.599 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.599 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.858 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.858 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:14:52.858 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:14:52.858 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.858 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.858 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:52.858 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:52.858 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:52.858 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:14:52.858 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.858 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.858 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.858 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.858 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.858 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.858 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:52.859 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 
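For reference, the nvmf_veth_init sequence that follows in the trace boils down to the topology sketched here (a condensed sketch only; commands, interface names and addresses are taken verbatim from the trace below, while the preliminary cleanup attempts, link-up steps and the iptables comment-match wrapper are omitted):

    # target-side namespace and the four veth pairs
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # initiator addresses stay in the root namespace, target addresses move into the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # all bridge-side peers are enslaved to one bridge, and TCP/4420 is opened on the initiator side
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br  master nvmf_br
    ip link set nvmf_init_br2 master nvmf_br
    ip link set nvmf_tgt_br   master nvmf_br
    ip link set nvmf_tgt_br2  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT

The ping checks later in the trace simply verify both directions of this setup: 10.0.0.3/10.0.0.4 reachable from the root namespace, 10.0.0.1/10.0.0.2 reachable from inside nvmf_tgt_ns_spdk.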
00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@458 -- # nvmf_veth_init 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:52.859 Cannot find device "nvmf_init_br" 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:52.859 Cannot find device "nvmf_init_br2" 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:52.859 Cannot find device "nvmf_tgt_br" 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:52.859 Cannot find device "nvmf_tgt_br2" 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:52.859 Cannot find device "nvmf_init_br" 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:14:52.859 03:17:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:52.859 Cannot find device "nvmf_init_br2" 00:14:52.859 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:14:52.859 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:52.859 Cannot find device "nvmf_tgt_br" 00:14:52.859 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:14:52.859 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:52.859 Cannot find device "nvmf_tgt_br2" 00:14:52.859 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:14:52.859 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:52.859 Cannot find device "nvmf_br" 00:14:52.859 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:14:52.859 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:52.859 Cannot find device "nvmf_init_if" 00:14:52.859 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:14:52.859 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:52.859 Cannot find device "nvmf_init_if2" 00:14:52.859 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:14:52.859 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:52.859 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:52.859 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:14:52.859 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:52.859 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:52.859 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:14:52.859 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:52.859 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:52.859 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:52.859 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:52.859 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:52.859 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:53.118 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:53.118 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:53.118 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:53.118 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:53.119 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:53.119 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:14:53.119 00:14:53.119 --- 10.0.0.3 ping statistics --- 00:14:53.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.119 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:53.119 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:53.119 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:14:53.119 00:14:53.119 --- 10.0.0.4 ping statistics --- 00:14:53.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.119 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:53.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:53.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:14:53.119 00:14:53.119 --- 10.0.0.1 ping statistics --- 00:14:53.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.119 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:53.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:53.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:14:53.119 00:14:53.119 --- 10.0.0.2 ping statistics --- 00:14:53.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.119 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # return 0 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=73737 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 73737 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 73737 ']' 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:53.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:53.119 03:17:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:53.378 [2024-10-09 03:17:36.428103] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:14:53.378 [2024-10-09 03:17:36.428201] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.378 [2024-10-09 03:17:36.566859] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.378 [2024-10-09 03:17:36.669657] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.378 [2024-10-09 03:17:36.669736] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:53.378 [2024-10-09 03:17:36.669762] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:53.378 [2024-10-09 03:17:36.669772] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:53.378 [2024-10-09 03:17:36.669782] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:53.378 [2024-10-09 03:17:36.670362] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.376 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:54.376 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:14:54.376 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:54.376 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:54.376 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:54.377 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:54.377 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:14:54.377 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:14:54.377 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:14:54.377 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.377 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:54.377 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.377 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:14:54.377 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.377 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:54.377 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.377 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:14:54.377 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.377 03:17:37 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:54.377 [2024-10-09 03:17:37.590107] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:54.377 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.377 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:14:54.377 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.377 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:54.377 Malloc0 00:14:54.377 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.377 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:14:54.377 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.377 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:54.377 [2024-10-09 03:17:37.658254] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:54.377 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.377 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:14:54.377 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.377 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:54.377 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.377 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:14:54.377 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.377 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:54.636 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.636 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:54.636 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.636 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:54.636 [2024-10-09 03:17:37.686397] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:54.636 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.636 03:17:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:54.636 [2024-10-09 03:17:37.867228] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:56.012 Initializing NVMe Controllers 00:14:56.012 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:56.012 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:14:56.012 Initialization complete. Launching workers. 00:14:56.012 ======================================================== 00:14:56.012 Latency(us) 00:14:56.012 Device Information : IOPS MiB/s Average min max 00:14:56.012 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 460.99 57.62 8693.96 5016.96 16040.87 00:14:56.012 ======================================================== 00:14:56.012 Total : 460.99 57.62 8693.96 5016.96 16040.87 00:14:56.012 00:14:56.012 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:14:56.012 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:14:56.012 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.012 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:56.012 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.013 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4370 00:14:56.013 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4370 -eq 0 ]] 00:14:56.013 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:56.013 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:14:56.013 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:56.013 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:14:56.013 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:56.013 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:14:56.013 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:56.013 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:56.013 rmmod nvme_tcp 00:14:56.013 rmmod nvme_fabrics 00:14:56.013 rmmod nvme_keyring 00:14:56.271 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:56.271 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:14:56.271 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:14:56.271 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 73737 ']' 00:14:56.271 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 73737 00:14:56.271 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 73737 ']' 00:14:56.271 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- 
# kill -0 73737 00:14:56.271 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:14:56.271 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:56.271 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73737 00:14:56.271 killing process with pid 73737 00:14:56.271 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:56.271 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:56.271 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73737' 00:14:56.271 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 73737 00:14:56.271 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 73737 00:14:56.271 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:56.271 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:56.271 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:56.271 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:14:56.271 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:14:56.271 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:56.271 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:14:56.271 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:56.271 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:56.271 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:56.530 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:56.530 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:56.531 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:56.531 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:56.531 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:56.531 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:56.531 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:56.531 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:56.531 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:56.531 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:56.531 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:56.531 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:56.531 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:56.531 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.531 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:56.531 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.531 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:14:56.531 00:14:56.531 real 0m4.137s 00:14:56.531 user 0m3.696s 00:14:56.531 sys 0m0.861s 00:14:56.531 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:56.531 03:17:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:56.790 ************************************ 00:14:56.790 END TEST nvmf_wait_for_buf 00:14:56.790 ************************************ 00:14:56.790 03:17:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:14:56.790 03:17:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:14:56.790 03:17:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:56.790 ************************************ 00:14:56.790 END TEST nvmf_target_extra 00:14:56.790 ************************************ 00:14:56.790 00:14:56.790 real 5m10.176s 00:14:56.790 user 10m48.265s 00:14:56.790 sys 1m9.274s 00:14:56.790 03:17:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:56.790 03:17:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:56.790 03:17:39 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:14:56.790 03:17:39 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:56.790 03:17:39 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:56.790 03:17:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:56.790 ************************************ 00:14:56.790 START TEST nvmf_host 00:14:56.790 ************************************ 00:14:56.790 03:17:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:14:56.790 * Looking for test storage... 
00:14:56.790 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:14:56.790 03:17:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:56.790 03:17:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:14:56.790 03:17:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:57.050 03:17:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:57.050 03:17:40 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:57.050 03:17:40 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:57.050 03:17:40 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:57.050 03:17:40 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:14:57.050 03:17:40 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:14:57.050 03:17:40 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:57.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.051 --rc genhtml_branch_coverage=1 00:14:57.051 --rc genhtml_function_coverage=1 00:14:57.051 --rc genhtml_legend=1 00:14:57.051 --rc geninfo_all_blocks=1 00:14:57.051 --rc geninfo_unexecuted_blocks=1 00:14:57.051 00:14:57.051 ' 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:57.051 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:57.051 --rc genhtml_branch_coverage=1 00:14:57.051 --rc genhtml_function_coverage=1 00:14:57.051 --rc genhtml_legend=1 00:14:57.051 --rc geninfo_all_blocks=1 00:14:57.051 --rc geninfo_unexecuted_blocks=1 00:14:57.051 00:14:57.051 ' 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:57.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.051 --rc genhtml_branch_coverage=1 00:14:57.051 --rc genhtml_function_coverage=1 00:14:57.051 --rc genhtml_legend=1 00:14:57.051 --rc geninfo_all_blocks=1 00:14:57.051 --rc geninfo_unexecuted_blocks=1 00:14:57.051 00:14:57.051 ' 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:57.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.051 --rc genhtml_branch_coverage=1 00:14:57.051 --rc genhtml_function_coverage=1 00:14:57.051 --rc genhtml_legend=1 00:14:57.051 --rc geninfo_all_blocks=1 00:14:57.051 --rc geninfo_unexecuted_blocks=1 00:14:57.051 00:14:57.051 ' 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:57.051 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:57.051 
03:17:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:57.051 ************************************ 00:14:57.051 START TEST nvmf_identify 00:14:57.051 ************************************ 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:57.051 * Looking for test storage... 00:14:57.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:57.051 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:57.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.052 --rc genhtml_branch_coverage=1 00:14:57.052 --rc genhtml_function_coverage=1 00:14:57.052 --rc genhtml_legend=1 00:14:57.052 --rc geninfo_all_blocks=1 00:14:57.052 --rc geninfo_unexecuted_blocks=1 00:14:57.052 00:14:57.052 ' 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:57.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.052 --rc genhtml_branch_coverage=1 00:14:57.052 --rc genhtml_function_coverage=1 00:14:57.052 --rc genhtml_legend=1 00:14:57.052 --rc geninfo_all_blocks=1 00:14:57.052 --rc geninfo_unexecuted_blocks=1 00:14:57.052 00:14:57.052 ' 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:57.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.052 --rc genhtml_branch_coverage=1 00:14:57.052 --rc genhtml_function_coverage=1 00:14:57.052 --rc genhtml_legend=1 00:14:57.052 --rc geninfo_all_blocks=1 00:14:57.052 --rc geninfo_unexecuted_blocks=1 00:14:57.052 00:14:57.052 ' 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:57.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.052 --rc genhtml_branch_coverage=1 00:14:57.052 --rc genhtml_function_coverage=1 00:14:57.052 --rc genhtml_legend=1 00:14:57.052 --rc geninfo_all_blocks=1 00:14:57.052 --rc geninfo_unexecuted_blocks=1 00:14:57.052 00:14:57.052 ' 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.052 
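The lcov gate earlier in this test's preamble (cmp_versions 1.15 '<' 2 via the lt helper) reduces to a per-component numeric comparison that picks the lcov 1.x option spelling. A minimal standalone sketch of that pattern, assuming GNU bash; ver_lt below is an illustrative name, not the lt/cmp_versions helpers from scripts/common.sh, and non-numeric version components are simply treated as 0:

    ver_lt() {                                   # succeeds when version $1 sorts before $2
        local -a a b
        IFS=.- read -ra a <<< "$1"
        IFS=.- read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            local x=${a[i]:-0} y=${b[i]:-0}      # missing components count as 0
            (( x < y )) && return 0              # first differing component decides
            (( x > y )) && return 1
        done
        return 1                                 # equal versions are not "less than"
    }

    lcov_ver=$(lcov --version | awk '{print $NF}')
    if ver_lt "$lcov_ver" 2; then                # lcov 1.x spells the rc options with an lcov_ prefix
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi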
03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:57.052 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.052 03:17:40 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@458 -- # nvmf_veth_init 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:57.052 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:57.312 Cannot find device "nvmf_init_br" 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:57.312 Cannot find device "nvmf_init_br2" 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:57.312 Cannot find device "nvmf_tgt_br" 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:14:57.312 Cannot find device "nvmf_tgt_br2" 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:57.312 Cannot find device "nvmf_init_br" 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:57.312 Cannot find device "nvmf_init_br2" 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:57.312 Cannot find device "nvmf_tgt_br" 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:57.312 Cannot find device "nvmf_tgt_br2" 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:57.312 Cannot find device "nvmf_br" 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:57.312 Cannot find device "nvmf_init_if" 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:57.312 Cannot find device "nvmf_init_if2" 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:57.312 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:57.312 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:57.312 
03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:57.312 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:57.573 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:57.573 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:14:57.573 00:14:57.573 --- 10.0.0.3 ping statistics --- 00:14:57.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.573 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:57.573 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:57.573 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:14:57.573 00:14:57.573 --- 10.0.0.4 ping statistics --- 00:14:57.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.573 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:57.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:57.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:14:57.573 00:14:57.573 --- 10.0.0.1 ping statistics --- 00:14:57.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.573 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:57.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:57.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:14:57.573 00:14:57.573 --- 10.0.0.2 ping statistics --- 00:14:57.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.573 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # return 0 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:57.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
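Condensed, the nvmf_veth_init sequence above builds one bridge with veth pairs: the initiator end stays on the host at 10.0.0.1/24 and the target end is moved into the nvmf_tgt_ns_spdk namespace at 10.0.0.3/24, with an iptables ACCEPT rule for port 4420 and ping checks in both directions. A stripped-down sketch of the same topology, assuming root plus iproute2/iptables; it shows only one initiator and one target pair, whereas the helper above also creates nvmf_init_if2/nvmf_tgt_if2 and the 10.0.0.2/10.0.0.4 addresses:

    ns=nvmf_tgt_ns_spdk
    ip netns add "$ns"
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator pair, stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target pair, "if" end goes into the netns
    ip link set nvmf_tgt_if netns "$ns"
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec "$ns" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec "$ns" ip link set nvmf_tgt_if up
    ip netns exec "$ns" ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                       # both host-side ends hang off one bridge
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                            # host -> target namespace
    ip netns exec "$ns" ping -c 1 10.0.0.1                        # target namespace -> host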
00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74068 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74068 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 74068 ']' 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:57.573 03:17:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:57.573 [2024-10-09 03:17:40.797500] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:14:57.573 [2024-10-09 03:17:40.797783] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.832 [2024-10-09 03:17:40.935538] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:57.832 [2024-10-09 03:17:41.039990] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.832 [2024-10-09 03:17:41.040336] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.832 [2024-10-09 03:17:41.040599] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.832 [2024-10-09 03:17:41.040805] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.832 [2024-10-09 03:17:41.040915] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
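The target is then launched inside that namespace and the test blocks until the RPC socket answers (the waitforlisten helper from autotest_common.sh). Approximated as a standalone snippet, with the helper replaced by a plain poll loop over rpc.py, which is an assumption about its effect rather than its implementation; the real test also installs a richer trap that dumps shared memory and tears the environment down:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the default RPC socket (/var/tmp/spdk.sock) until the app is up.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods &>/dev/null; do
        sleep 0.5
    done
    trap 'kill "$nvmfpid"' SIGINT SIGTERM EXIT   # simplified stand-in for the test's cleanup trap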
00:14:57.832 [2024-10-09 03:17:41.042400] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.832 [2024-10-09 03:17:41.042508] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.832 [2024-10-09 03:17:41.042582] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:14:57.832 [2024-10-09 03:17:41.042583] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.832 [2024-10-09 03:17:41.102734] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:58.092 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:58.092 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:14:58.092 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:58.092 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.092 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:58.092 [2024-10-09 03:17:41.188103] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:58.092 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.092 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:58.092 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:58.092 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:58.092 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:58.092 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.092 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:58.092 Malloc0 00:14:58.092 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.092 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:58.092 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.092 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:58.092 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.092 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:58.092 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.092 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:58.092 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.092 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:58.092 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.092 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:58.092 [2024-10-09 03:17:41.292025] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:58.092 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.092 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:58.092 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.092 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:58.092 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.092 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:58.092 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.092 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:58.092 [ 00:14:58.092 { 00:14:58.092 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:58.092 "subtype": "Discovery", 00:14:58.092 "listen_addresses": [ 00:14:58.092 { 00:14:58.092 "trtype": "TCP", 00:14:58.092 "adrfam": "IPv4", 00:14:58.092 "traddr": "10.0.0.3", 00:14:58.092 "trsvcid": "4420" 00:14:58.092 } 00:14:58.092 ], 00:14:58.092 "allow_any_host": true, 00:14:58.092 "hosts": [] 00:14:58.092 }, 00:14:58.092 { 00:14:58.092 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:58.092 "subtype": "NVMe", 00:14:58.092 "listen_addresses": [ 00:14:58.092 { 00:14:58.092 "trtype": "TCP", 00:14:58.092 "adrfam": "IPv4", 00:14:58.092 "traddr": "10.0.0.3", 00:14:58.092 "trsvcid": "4420" 00:14:58.092 } 00:14:58.092 ], 00:14:58.092 "allow_any_host": true, 00:14:58.092 "hosts": [], 00:14:58.092 "serial_number": "SPDK00000000000001", 00:14:58.092 "model_number": "SPDK bdev Controller", 00:14:58.092 "max_namespaces": 32, 00:14:58.092 "min_cntlid": 1, 00:14:58.092 "max_cntlid": 65519, 00:14:58.092 "namespaces": [ 00:14:58.092 { 00:14:58.092 "nsid": 1, 00:14:58.092 "bdev_name": "Malloc0", 00:14:58.092 "name": "Malloc0", 00:14:58.092 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:58.092 "eui64": "ABCDEF0123456789", 00:14:58.092 "uuid": "610566d1-66f9-4a6c-8816-afdebfca08cf" 00:14:58.092 } 00:14:58.092 ] 00:14:58.092 } 00:14:58.092 ] 00:14:58.092 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.092 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:58.092 [2024-10-09 03:17:41.349942] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
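Laid out as plain rpc.py calls, the rpc_cmd sequence above provisions the transport, a 64 MiB malloc bdev, and the cnode1 subsystem that the identify run then queries; rpc.py reaches the target over the UNIX-domain socket /var/tmp/spdk.sock regardless of the network namespace. Paths are the ones used in this job, and the arguments are transcribed from the calls above rather than a general recipe:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_get_subsystems
    # followed by the discovery-subsystem identify that produced the dump below
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all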
00:14:58.092 [2024-10-09 03:17:41.350019] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74096 ] 00:14:58.354 [2024-10-09 03:17:41.489965] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:14:58.354 [2024-10-09 03:17:41.490049] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:58.354 [2024-10-09 03:17:41.490057] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:58.354 [2024-10-09 03:17:41.490077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:58.354 [2024-10-09 03:17:41.490088] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:58.354 [2024-10-09 03:17:41.490378] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:14:58.354 [2024-10-09 03:17:41.490469] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x14e0750 0 00:14:58.354 [2024-10-09 03:17:41.495118] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:58.354 [2024-10-09 03:17:41.495140] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:58.354 [2024-10-09 03:17:41.495146] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:58.354 [2024-10-09 03:17:41.495150] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:58.354 [2024-10-09 03:17:41.495190] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.354 [2024-10-09 03:17:41.495198] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.354 [2024-10-09 03:17:41.495202] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e0750) 00:14:58.354 [2024-10-09 03:17:41.495217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:58.354 [2024-10-09 03:17:41.495248] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544840, cid 0, qid 0 00:14:58.354 [2024-10-09 03:17:41.503114] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.354 [2024-10-09 03:17:41.503132] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.354 [2024-10-09 03:17:41.503137] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.354 [2024-10-09 03:17:41.503143] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544840) on tqpair=0x14e0750 00:14:58.354 [2024-10-09 03:17:41.503157] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:58.354 [2024-10-09 03:17:41.503164] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:14:58.354 [2024-10-09 03:17:41.503171] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:14:58.354 [2024-10-09 03:17:41.503188] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.354 [2024-10-09 03:17:41.503194] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.354 
[2024-10-09 03:17:41.503198] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e0750) 00:14:58.354 [2024-10-09 03:17:41.503208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.354 [2024-10-09 03:17:41.503235] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544840, cid 0, qid 0 00:14:58.354 [2024-10-09 03:17:41.503301] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.354 [2024-10-09 03:17:41.503309] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.354 [2024-10-09 03:17:41.503313] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.354 [2024-10-09 03:17:41.503317] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544840) on tqpair=0x14e0750 00:14:58.354 [2024-10-09 03:17:41.503324] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:14:58.354 [2024-10-09 03:17:41.503332] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:14:58.354 [2024-10-09 03:17:41.503340] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.354 [2024-10-09 03:17:41.503345] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.354 [2024-10-09 03:17:41.503349] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e0750) 00:14:58.354 [2024-10-09 03:17:41.503357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.354 [2024-10-09 03:17:41.503378] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544840, cid 0, qid 0 00:14:58.354 [2024-10-09 03:17:41.503429] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.354 [2024-10-09 03:17:41.503436] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.354 [2024-10-09 03:17:41.503440] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.354 [2024-10-09 03:17:41.503444] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544840) on tqpair=0x14e0750 00:14:58.354 [2024-10-09 03:17:41.503461] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:14:58.354 [2024-10-09 03:17:41.503469] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:14:58.354 [2024-10-09 03:17:41.503477] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.354 [2024-10-09 03:17:41.503482] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.354 [2024-10-09 03:17:41.503486] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e0750) 00:14:58.354 [2024-10-09 03:17:41.503493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.354 [2024-10-09 03:17:41.503512] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544840, cid 0, qid 0 00:14:58.354 [2024-10-09 03:17:41.503562] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.354 [2024-10-09 03:17:41.503569] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.354 [2024-10-09 03:17:41.503573] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.354 [2024-10-09 03:17:41.503577] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544840) on tqpair=0x14e0750 00:14:58.354 [2024-10-09 03:17:41.503583] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:58.354 [2024-10-09 03:17:41.503594] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.354 [2024-10-09 03:17:41.503599] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.354 [2024-10-09 03:17:41.503602] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e0750) 00:14:58.354 [2024-10-09 03:17:41.503610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.354 [2024-10-09 03:17:41.503628] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544840, cid 0, qid 0 00:14:58.354 [2024-10-09 03:17:41.503672] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.354 [2024-10-09 03:17:41.503679] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.354 [2024-10-09 03:17:41.503682] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.354 [2024-10-09 03:17:41.503687] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544840) on tqpair=0x14e0750 00:14:58.354 [2024-10-09 03:17:41.503692] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:14:58.354 [2024-10-09 03:17:41.503698] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:14:58.354 [2024-10-09 03:17:41.503706] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:58.354 [2024-10-09 03:17:41.503811] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:14:58.354 [2024-10-09 03:17:41.503817] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:58.354 [2024-10-09 03:17:41.503827] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.354 [2024-10-09 03:17:41.503832] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.354 [2024-10-09 03:17:41.503836] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e0750) 00:14:58.354 [2024-10-09 03:17:41.503843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.354 [2024-10-09 03:17:41.503863] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544840, cid 0, qid 0 00:14:58.354 [2024-10-09 03:17:41.503925] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.354 [2024-10-09 03:17:41.503932] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.354 [2024-10-09 03:17:41.503936] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.354 
[2024-10-09 03:17:41.503940] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544840) on tqpair=0x14e0750 00:14:58.354 [2024-10-09 03:17:41.503945] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:58.354 [2024-10-09 03:17:41.503956] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.354 [2024-10-09 03:17:41.503961] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.354 [2024-10-09 03:17:41.503965] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e0750) 00:14:58.354 [2024-10-09 03:17:41.503973] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.354 [2024-10-09 03:17:41.503992] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544840, cid 0, qid 0 00:14:58.354 [2024-10-09 03:17:41.504041] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.354 [2024-10-09 03:17:41.504061] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.354 [2024-10-09 03:17:41.504066] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.354 [2024-10-09 03:17:41.504071] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544840) on tqpair=0x14e0750 00:14:58.354 [2024-10-09 03:17:41.504076] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:58.354 [2024-10-09 03:17:41.504081] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:14:58.354 [2024-10-09 03:17:41.504090] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:14:58.354 [2024-10-09 03:17:41.504110] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:14:58.354 [2024-10-09 03:17:41.504122] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.354 [2024-10-09 03:17:41.504126] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e0750) 00:14:58.354 [2024-10-09 03:17:41.504134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.355 [2024-10-09 03:17:41.504156] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544840, cid 0, qid 0 00:14:58.355 [2024-10-09 03:17:41.504263] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:58.355 [2024-10-09 03:17:41.504271] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:58.355 [2024-10-09 03:17:41.504275] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:58.355 [2024-10-09 03:17:41.504279] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14e0750): datao=0, datal=4096, cccid=0 00:14:58.355 [2024-10-09 03:17:41.504284] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1544840) on tqpair(0x14e0750): expected_datao=0, payload_size=4096 00:14:58.355 [2024-10-09 03:17:41.504290] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.355 
[2024-10-09 03:17:41.504298] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:58.355 [2024-10-09 03:17:41.504302] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:58.355 [2024-10-09 03:17:41.504311] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.355 [2024-10-09 03:17:41.504318] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.355 [2024-10-09 03:17:41.504321] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.355 [2024-10-09 03:17:41.504326] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544840) on tqpair=0x14e0750 00:14:58.355 [2024-10-09 03:17:41.504335] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:14:58.355 [2024-10-09 03:17:41.504340] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:14:58.355 [2024-10-09 03:17:41.504345] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:14:58.355 [2024-10-09 03:17:41.504350] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:14:58.355 [2024-10-09 03:17:41.504356] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:14:58.355 [2024-10-09 03:17:41.504361] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:14:58.355 [2024-10-09 03:17:41.504370] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:14:58.355 [2024-10-09 03:17:41.504382] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.355 [2024-10-09 03:17:41.504387] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.355 [2024-10-09 03:17:41.504392] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e0750) 00:14:58.355 [2024-10-09 03:17:41.504400] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:58.355 [2024-10-09 03:17:41.504420] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544840, cid 0, qid 0 00:14:58.355 [2024-10-09 03:17:41.504477] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.355 [2024-10-09 03:17:41.504484] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.355 [2024-10-09 03:17:41.504488] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.355 [2024-10-09 03:17:41.504500] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544840) on tqpair=0x14e0750 00:14:58.355 [2024-10-09 03:17:41.504508] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.355 [2024-10-09 03:17:41.504512] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.355 [2024-10-09 03:17:41.504516] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e0750) 00:14:58.355 [2024-10-09 03:17:41.504523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.355 [2024-10-09 03:17:41.504530] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.355 [2024-10-09 03:17:41.504534] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.355 [2024-10-09 03:17:41.504538] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x14e0750) 00:14:58.355 [2024-10-09 03:17:41.504544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.355 [2024-10-09 03:17:41.504551] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.355 [2024-10-09 03:17:41.504555] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.355 [2024-10-09 03:17:41.504559] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x14e0750) 00:14:58.355 [2024-10-09 03:17:41.504565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.355 [2024-10-09 03:17:41.504572] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.355 [2024-10-09 03:17:41.504576] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.355 [2024-10-09 03:17:41.504579] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e0750) 00:14:58.355 [2024-10-09 03:17:41.504585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.355 [2024-10-09 03:17:41.504591] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:14:58.355 [2024-10-09 03:17:41.504604] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:58.355 [2024-10-09 03:17:41.504612] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.355 [2024-10-09 03:17:41.504616] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14e0750) 00:14:58.355 [2024-10-09 03:17:41.504623] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.355 [2024-10-09 03:17:41.504651] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544840, cid 0, qid 0 00:14:58.355 [2024-10-09 03:17:41.504658] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15449c0, cid 1, qid 0 00:14:58.355 [2024-10-09 03:17:41.504663] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544b40, cid 2, qid 0 00:14:58.355 [2024-10-09 03:17:41.504669] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544cc0, cid 3, qid 0 00:14:58.355 [2024-10-09 03:17:41.504674] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544e40, cid 4, qid 0 00:14:58.355 [2024-10-09 03:17:41.504759] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.355 [2024-10-09 03:17:41.504766] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.355 [2024-10-09 03:17:41.504770] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.355 [2024-10-09 03:17:41.504774] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544e40) on tqpair=0x14e0750 00:14:58.355 [2024-10-09 03:17:41.504780] 
nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:14:58.355 [2024-10-09 03:17:41.504785] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:14:58.355 [2024-10-09 03:17:41.504797] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.355 [2024-10-09 03:17:41.504802] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14e0750) 00:14:58.355 [2024-10-09 03:17:41.504809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.355 [2024-10-09 03:17:41.504828] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544e40, cid 4, qid 0 00:14:58.355 [2024-10-09 03:17:41.504883] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:58.355 [2024-10-09 03:17:41.504890] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:58.355 [2024-10-09 03:17:41.504894] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:58.355 [2024-10-09 03:17:41.504898] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14e0750): datao=0, datal=4096, cccid=4 00:14:58.355 [2024-10-09 03:17:41.504903] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1544e40) on tqpair(0x14e0750): expected_datao=0, payload_size=4096 00:14:58.355 [2024-10-09 03:17:41.504908] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.355 [2024-10-09 03:17:41.504915] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:58.355 [2024-10-09 03:17:41.504919] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:58.355 [2024-10-09 03:17:41.504928] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.355 [2024-10-09 03:17:41.504934] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.355 [2024-10-09 03:17:41.504938] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.355 [2024-10-09 03:17:41.504942] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544e40) on tqpair=0x14e0750 00:14:58.355 [2024-10-09 03:17:41.504956] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:14:58.355 [2024-10-09 03:17:41.504984] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.355 [2024-10-09 03:17:41.504990] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14e0750) 00:14:58.355 [2024-10-09 03:17:41.504997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.355 [2024-10-09 03:17:41.505005] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.355 [2024-10-09 03:17:41.505010] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.355 [2024-10-09 03:17:41.505014] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14e0750) 00:14:58.355 [2024-10-09 03:17:41.505020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.355 [2024-10-09 03:17:41.505045] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1544e40, cid 4, qid 0 00:14:58.355 [2024-10-09 03:17:41.505066] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544fc0, cid 5, qid 0 00:14:58.355 [2024-10-09 03:17:41.505156] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:58.355 [2024-10-09 03:17:41.505163] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:58.355 [2024-10-09 03:17:41.505167] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:58.355 [2024-10-09 03:17:41.505171] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14e0750): datao=0, datal=1024, cccid=4 00:14:58.355 [2024-10-09 03:17:41.505176] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1544e40) on tqpair(0x14e0750): expected_datao=0, payload_size=1024 00:14:58.355 [2024-10-09 03:17:41.505181] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.355 [2024-10-09 03:17:41.505188] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:58.355 [2024-10-09 03:17:41.505192] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:58.355 [2024-10-09 03:17:41.505198] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.355 [2024-10-09 03:17:41.505204] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.355 [2024-10-09 03:17:41.505208] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.355 [2024-10-09 03:17:41.505212] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544fc0) on tqpair=0x14e0750 00:14:58.355 [2024-10-09 03:17:41.505231] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.355 [2024-10-09 03:17:41.505239] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.355 [2024-10-09 03:17:41.505243] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.355 [2024-10-09 03:17:41.505247] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544e40) on tqpair=0x14e0750 00:14:58.355 [2024-10-09 03:17:41.505259] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.355 [2024-10-09 03:17:41.505264] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14e0750) 00:14:58.355 [2024-10-09 03:17:41.505272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.356 [2024-10-09 03:17:41.505296] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544e40, cid 4, qid 0 00:14:58.356 [2024-10-09 03:17:41.505369] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:58.356 [2024-10-09 03:17:41.505376] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:58.356 [2024-10-09 03:17:41.505379] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:58.356 [2024-10-09 03:17:41.505383] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14e0750): datao=0, datal=3072, cccid=4 00:14:58.356 [2024-10-09 03:17:41.505388] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1544e40) on tqpair(0x14e0750): expected_datao=0, payload_size=3072 00:14:58.356 [2024-10-09 03:17:41.505393] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.356 [2024-10-09 03:17:41.505400] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:58.356 [2024-10-09 03:17:41.505404] 
nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:58.356 [2024-10-09 03:17:41.505413] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.356 [2024-10-09 03:17:41.505419] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.356 [2024-10-09 03:17:41.505422] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.356 [2024-10-09 03:17:41.505427] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544e40) on tqpair=0x14e0750 00:14:58.356 [2024-10-09 03:17:41.505437] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.356 [2024-10-09 03:17:41.505441] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14e0750) 00:14:58.356 [2024-10-09 03:17:41.505449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.356 [2024-10-09 03:17:41.505472] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544e40, cid 4, qid 0 00:14:58.356 [2024-10-09 03:17:41.505535] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:58.356 [2024-10-09 03:17:41.505542] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:58.356 [2024-10-09 03:17:41.505545] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:58.356 [2024-10-09 03:17:41.505549] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14e0750): datao=0, datal=8, cccid=4 00:14:58.356 [2024-10-09 03:17:41.505554] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1544e40) on tqpair(0x14e0750): expected_datao=0, payload_size=8 00:14:58.356 [2024-10-09 03:17:41.505559] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.356 [2024-10-09 03:17:41.505566] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:58.356 [2024-10-09 03:17:41.505570] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:58.356 ===================================================== 00:14:58.356 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:58.356 ===================================================== 00:14:58.356 Controller Capabilities/Features 00:14:58.356 ================================ 00:14:58.356 Vendor ID: 0000 00:14:58.356 Subsystem Vendor ID: 0000 00:14:58.356 Serial Number: .................... 00:14:58.356 Model Number: ........................................ 
00:14:58.356 Firmware Version: 25.01 00:14:58.356 Recommended Arb Burst: 0 00:14:58.356 IEEE OUI Identifier: 00 00 00 00:14:58.356 Multi-path I/O 00:14:58.356 May have multiple subsystem ports: No 00:14:58.356 May have multiple controllers: No 00:14:58.356 Associated with SR-IOV VF: No 00:14:58.356 Max Data Transfer Size: 131072 00:14:58.356 Max Number of Namespaces: 0 00:14:58.356 Max Number of I/O Queues: 1024 00:14:58.356 NVMe Specification Version (VS): 1.3 00:14:58.356 NVMe Specification Version (Identify): 1.3 00:14:58.356 Maximum Queue Entries: 128 00:14:58.356 Contiguous Queues Required: Yes 00:14:58.356 Arbitration Mechanisms Supported 00:14:58.356 Weighted Round Robin: Not Supported 00:14:58.356 Vendor Specific: Not Supported 00:14:58.356 Reset Timeout: 15000 ms 00:14:58.356 Doorbell Stride: 4 bytes 00:14:58.356 NVM Subsystem Reset: Not Supported 00:14:58.356 Command Sets Supported 00:14:58.356 NVM Command Set: Supported 00:14:58.356 Boot Partition: Not Supported 00:14:58.356 Memory Page Size Minimum: 4096 bytes 00:14:58.356 Memory Page Size Maximum: 4096 bytes 00:14:58.356 Persistent Memory Region: Not Supported 00:14:58.356 Optional Asynchronous Events Supported 00:14:58.356 Namespace Attribute Notices: Not Supported 00:14:58.356 Firmware Activation Notices: Not Supported 00:14:58.356 ANA Change Notices: Not Supported 00:14:58.356 PLE Aggregate Log Change Notices: Not Supported 00:14:58.356 LBA Status Info Alert Notices: Not Supported 00:14:58.356 EGE Aggregate Log Change Notices: Not Supported 00:14:58.356 Normal NVM Subsystem Shutdown event: Not Supported 00:14:58.356 Zone Descriptor Change Notices: Not Supported 00:14:58.356 Discovery Log Change Notices: Supported 00:14:58.356 Controller Attributes 00:14:58.356 128-bit Host Identifier: Not Supported 00:14:58.356 Non-Operational Permissive Mode: Not Supported 00:14:58.356 NVM Sets: Not Supported 00:14:58.356 Read Recovery Levels: Not Supported 00:14:58.356 Endurance Groups: Not Supported 00:14:58.356 Predictable Latency Mode: Not Supported 00:14:58.356 Traffic Based Keep ALive: Not Supported 00:14:58.356 Namespace Granularity: Not Supported 00:14:58.356 SQ Associations: Not Supported 00:14:58.356 UUID List: Not Supported 00:14:58.356 Multi-Domain Subsystem: Not Supported 00:14:58.356 Fixed Capacity Management: Not Supported 00:14:58.356 Variable Capacity Management: Not Supported 00:14:58.356 Delete Endurance Group: Not Supported 00:14:58.356 Delete NVM Set: Not Supported 00:14:58.356 Extended LBA Formats Supported: Not Supported 00:14:58.356 Flexible Data Placement Supported: Not Supported 00:14:58.356 00:14:58.356 Controller Memory Buffer Support 00:14:58.356 ================================ 00:14:58.356 Supported: No 00:14:58.356 00:14:58.356 Persistent Memory Region Support 00:14:58.356 ================================ 00:14:58.356 Supported: No 00:14:58.356 00:14:58.356 Admin Command Set Attributes 00:14:58.356 ============================ 00:14:58.356 Security Send/Receive: Not Supported 00:14:58.356 Format NVM: Not Supported 00:14:58.356 Firmware Activate/Download: Not Supported 00:14:58.356 Namespace Management: Not Supported 00:14:58.356 Device Self-Test: Not Supported 00:14:58.356 Directives: Not Supported 00:14:58.356 NVMe-MI: Not Supported 00:14:58.356 Virtualization Management: Not Supported 00:14:58.356 Doorbell Buffer Config: Not Supported 00:14:58.356 Get LBA Status Capability: Not Supported 00:14:58.356 Command & Feature Lockdown Capability: Not Supported 00:14:58.356 Abort Command Limit: 1 00:14:58.356 Async 
Event Request Limit: 4 00:14:58.356 Number of Firmware Slots: N/A 00:14:58.356 Firmware Slot 1 Read-Only: N/A 00:14:58.356 Firmware Activation Without Reset: N/A 00:14:58.356 Multiple Update Detection Support: N/A 00:14:58.356 Firmware Update Granularity: No Information Provided 00:14:58.356 Per-Namespace SMART Log: No 00:14:58.356 Asymmetric Namespace Access Log Page: Not Supported 00:14:58.356 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:14:58.356 Command Effects Log Page: Not Supported 00:14:58.356 Get Log Page Extended Data: Supported 00:14:58.356 Telemetry Log Pages: Not Supported 00:14:58.356 Persistent Event Log Pages: Not Supported 00:14:58.356 Supported Log Pages Log Page: May Support 00:14:58.356 Commands Supported & Effects Log Page: Not Supported 00:14:58.356 Feature Identifiers & Effects Log Page:May Support 00:14:58.356 NVMe-MI Commands & Effects Log Page: May Support 00:14:58.356 Data Area 4 for Telemetry Log: Not Supported 00:14:58.356 Error Log Page Entries Supported: 128 00:14:58.356 Keep Alive: Not Supported 00:14:58.356 00:14:58.356 NVM Command Set Attributes 00:14:58.356 ========================== 00:14:58.356 Submission Queue Entry Size 00:14:58.356 Max: 1 00:14:58.356 Min: 1 00:14:58.356 Completion Queue Entry Size 00:14:58.356 Max: 1 00:14:58.356 Min: 1 00:14:58.356 Number of Namespaces: 0 00:14:58.356 Compare Command: Not Supported 00:14:58.356 Write Uncorrectable Command: Not Supported 00:14:58.356 Dataset Management Command: Not Supported 00:14:58.356 Write Zeroes Command: Not Supported 00:14:58.356 Set Features Save Field: Not Supported 00:14:58.356 Reservations: Not Supported 00:14:58.356 Timestamp: Not Supported 00:14:58.356 Copy: Not Supported 00:14:58.356 Volatile Write Cache: Not Present 00:14:58.356 Atomic Write Unit (Normal): 1 00:14:58.356 Atomic Write Unit (PFail): 1 00:14:58.356 Atomic Compare & Write Unit: 1 00:14:58.356 Fused Compare & Write: Supported 00:14:58.356 Scatter-Gather List 00:14:58.356 SGL Command Set: Supported 00:14:58.356 SGL Keyed: Supported 00:14:58.356 SGL Bit Bucket Descriptor: Not Supported 00:14:58.356 SGL Metadata Pointer: Not Supported 00:14:58.356 Oversized SGL: Not Supported 00:14:58.356 SGL Metadata Address: Not Supported 00:14:58.356 SGL Offset: Supported 00:14:58.356 Transport SGL Data Block: Not Supported 00:14:58.356 Replay Protected Memory Block: Not Supported 00:14:58.356 00:14:58.356 Firmware Slot Information 00:14:58.356 ========================= 00:14:58.356 Active slot: 0 00:14:58.356 00:14:58.356 00:14:58.356 Error Log 00:14:58.356 ========= 00:14:58.356 00:14:58.356 Active Namespaces 00:14:58.356 ================= 00:14:58.356 Discovery Log Page 00:14:58.356 ================== 00:14:58.356 Generation Counter: 2 00:14:58.356 Number of Records: 2 00:14:58.356 Record Format: 0 00:14:58.356 00:14:58.356 Discovery Log Entry 0 00:14:58.356 ---------------------- 00:14:58.356 Transport Type: 3 (TCP) 00:14:58.357 Address Family: 1 (IPv4) 00:14:58.357 Subsystem Type: 3 (Current Discovery Subsystem) 00:14:58.357 Entry Flags: 00:14:58.357 Duplicate Returned Information: 1 00:14:58.357 Explicit Persistent Connection Support for Discovery: 1 00:14:58.357 Transport Requirements: 00:14:58.357 Secure Channel: Not Required 00:14:58.357 Port ID: 0 (0x0000) 00:14:58.357 Controller ID: 65535 (0xffff) 00:14:58.357 Admin Max SQ Size: 128 00:14:58.357 Transport Service Identifier: 4420 00:14:58.357 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:14:58.357 Transport Address: 10.0.0.3 00:14:58.357 
Discovery Log Entry 1 00:14:58.357 ---------------------- 00:14:58.357 Transport Type: 3 (TCP) 00:14:58.357 Address Family: 1 (IPv4) 00:14:58.357 Subsystem Type: 2 (NVM Subsystem) 00:14:58.357 Entry Flags: 00:14:58.357 Duplicate Returned Information: 0 00:14:58.357 Explicit Persistent Connection Support for Discovery: 0 00:14:58.357 Transport Requirements: 00:14:58.357 Secure Channel: Not Required 00:14:58.357 Port ID: 0 (0x0000) 00:14:58.357 Controller ID: 65535 (0xffff) 00:14:58.357 Admin Max SQ Size: 128 00:14:58.357 Transport Service Identifier: 4420 00:14:58.357 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:14:58.357 Transport Address: 10.0.0.3 [2024-10-09 03:17:41.505584] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.357 [2024-10-09 03:17:41.505592] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.357 [2024-10-09 03:17:41.505596] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.357 [2024-10-09 03:17:41.505600] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544e40) on tqpair=0x14e0750 00:14:58.357 [2024-10-09 03:17:41.505694] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:14:58.357 [2024-10-09 03:17:41.505708] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544840) on tqpair=0x14e0750 00:14:58.357 [2024-10-09 03:17:41.505715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.357 [2024-10-09 03:17:41.505721] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15449c0) on tqpair=0x14e0750 00:14:58.357 [2024-10-09 03:17:41.505726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.357 [2024-10-09 03:17:41.505732] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544b40) on tqpair=0x14e0750 00:14:58.357 [2024-10-09 03:17:41.505737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.357 [2024-10-09 03:17:41.505742] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544cc0) on tqpair=0x14e0750 00:14:58.357 [2024-10-09 03:17:41.505747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.357 [2024-10-09 03:17:41.505757] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.357 [2024-10-09 03:17:41.505761] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.357 [2024-10-09 03:17:41.505765] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e0750) 00:14:58.357 [2024-10-09 03:17:41.505773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.357 [2024-10-09 03:17:41.505796] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544cc0, cid 3, qid 0 00:14:58.357 [2024-10-09 03:17:41.505846] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.357 [2024-10-09 03:17:41.505853] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.357 [2024-10-09 03:17:41.505856] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.357 [2024-10-09 03:17:41.505861] 
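(Illustrative aside, not output of this job.) The discovery listing above was fetched over the admin queue with three GET LOG PAGE commands for log page 0x70; their cdw10 values (0x00ff0070, 0x02ff0070 and 0x00010070) appear in the *NOTICE* lines preceding the listing. In the NVMe GET LOG PAGE command, cdw10 carries the Log Page ID in bits 7:0 and NUMDL (a 0-based dword count) in bits 31:16, which a few lines of shell can confirm:

    # Illustrative decode of the GET LOG PAGE cdw10 values traced above (not produced by this job).
    for cdw10 in 0x00ff0070 0x02ff0070 0x00010070; do
        printf 'cdw10=%s  LID=0x%02x  bytes=%d\n' "$cdw10" \
            $(( cdw10 & 0xff )) $(( (((cdw10 >> 16) & 0xffff) + 1) * 4 ))
    done
    # -> LID=0x70 (Discovery log) with transfer sizes 1024, 3072 and 8 bytes, matching the
    #    datal=1024/3072/8 values in the c2h_data debug lines above.

Outside this harness, the same two records would typically be obtained with nvme-cli (assuming it is installed on the initiator): nvme discover -t tcp -a 10.0.0.3 -s 4420.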
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544cc0) on tqpair=0x14e0750 00:14:58.357 [2024-10-09 03:17:41.505869] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.357 [2024-10-09 03:17:41.505873] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.357 [2024-10-09 03:17:41.505877] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e0750) 00:14:58.357 [2024-10-09 03:17:41.505885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.357 [2024-10-09 03:17:41.505907] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544cc0, cid 3, qid 0 00:14:58.357 [2024-10-09 03:17:41.505968] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.357 [2024-10-09 03:17:41.505975] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.357 [2024-10-09 03:17:41.505979] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.357 [2024-10-09 03:17:41.505995] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544cc0) on tqpair=0x14e0750 00:14:58.357 [2024-10-09 03:17:41.506008] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:14:58.357 [2024-10-09 03:17:41.506013] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:14:58.357 [2024-10-09 03:17:41.506024] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.357 [2024-10-09 03:17:41.506029] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.357 [2024-10-09 03:17:41.506033] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e0750) 00:14:58.357 [2024-10-09 03:17:41.506041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.357 [2024-10-09 03:17:41.506074] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544cc0, cid 3, qid 0 00:14:58.357 [2024-10-09 03:17:41.506127] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.357 [2024-10-09 03:17:41.506134] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.357 [2024-10-09 03:17:41.506138] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.357 [2024-10-09 03:17:41.506142] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544cc0) on tqpair=0x14e0750 00:14:58.357 [2024-10-09 03:17:41.506154] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.357 [2024-10-09 03:17:41.506159] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.357 [2024-10-09 03:17:41.506163] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e0750) 00:14:58.357 [2024-10-09 03:17:41.506171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.357 [2024-10-09 03:17:41.506189] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544cc0, cid 3, qid 0 00:14:58.357 [2024-10-09 03:17:41.506237] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.357 [2024-10-09 03:17:41.506244] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.357 [2024-10-09 
03:17:41.506248] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.357 [2024-10-09 03:17:41.506252] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544cc0) on tqpair=0x14e0750 00:14:58.357 [2024-10-09 03:17:41.506264] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.357 [2024-10-09 03:17:41.506268] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.357 [2024-10-09 03:17:41.506272] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e0750) 00:14:58.357 [2024-10-09 03:17:41.506279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.357 [2024-10-09 03:17:41.506297] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544cc0, cid 3, qid 0 00:14:58.357 [2024-10-09 03:17:41.506345] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.357 [2024-10-09 03:17:41.506352] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.357 [2024-10-09 03:17:41.506356] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.357 [2024-10-09 03:17:41.506360] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544cc0) on tqpair=0x14e0750 00:14:58.357 [2024-10-09 03:17:41.506370] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.357 [2024-10-09 03:17:41.506375] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.357 [2024-10-09 03:17:41.506379] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e0750) 00:14:58.357 [2024-10-09 03:17:41.506386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.357 [2024-10-09 03:17:41.506404] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544cc0, cid 3, qid 0 00:14:58.357 [2024-10-09 03:17:41.506455] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.357 [2024-10-09 03:17:41.506462] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.357 [2024-10-09 03:17:41.506466] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.357 [2024-10-09 03:17:41.506470] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544cc0) on tqpair=0x14e0750 00:14:58.357 [2024-10-09 03:17:41.506481] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.357 [2024-10-09 03:17:41.506486] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.357 [2024-10-09 03:17:41.506489] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e0750) 00:14:58.357 [2024-10-09 03:17:41.506497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.357 [2024-10-09 03:17:41.506514] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544cc0, cid 3, qid 0 00:14:58.357 [2024-10-09 03:17:41.506562] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.357 [2024-10-09 03:17:41.506569] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.357 [2024-10-09 03:17:41.506573] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.357 [2024-10-09 03:17:41.506577] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544cc0) on 
tqpair=0x14e0750 00:14:58.357 [2024-10-09 03:17:41.506587] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.357 [2024-10-09 03:17:41.506592] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.357 [2024-10-09 03:17:41.506596] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e0750) 00:14:58.357 [2024-10-09 03:17:41.506603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.357 [2024-10-09 03:17:41.506621] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544cc0, cid 3, qid 0 00:14:58.357 [2024-10-09 03:17:41.506666] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.357 [2024-10-09 03:17:41.506672] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.357 [2024-10-09 03:17:41.506676] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.357 [2024-10-09 03:17:41.506680] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544cc0) on tqpair=0x14e0750 00:14:58.357 [2024-10-09 03:17:41.506691] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.357 [2024-10-09 03:17:41.506696] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.357 [2024-10-09 03:17:41.506699] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e0750) 00:14:58.357 [2024-10-09 03:17:41.506707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.357 [2024-10-09 03:17:41.506725] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544cc0, cid 3, qid 0 00:14:58.358 [2024-10-09 03:17:41.506769] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.358 [2024-10-09 03:17:41.506776] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.358 [2024-10-09 03:17:41.506780] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.358 [2024-10-09 03:17:41.506784] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544cc0) on tqpair=0x14e0750 00:14:58.358 [2024-10-09 03:17:41.506794] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.358 [2024-10-09 03:17:41.506799] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.358 [2024-10-09 03:17:41.506803] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e0750) 00:14:58.358 [2024-10-09 03:17:41.506810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.358 [2024-10-09 03:17:41.506828] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544cc0, cid 3, qid 0 00:14:58.358 [2024-10-09 03:17:41.506874] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.358 [2024-10-09 03:17:41.506880] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.358 [2024-10-09 03:17:41.506884] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.358 [2024-10-09 03:17:41.506888] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544cc0) on tqpair=0x14e0750 00:14:58.358 [2024-10-09 03:17:41.506899] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.358 [2024-10-09 03:17:41.506904] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.358 [2024-10-09 03:17:41.506907] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e0750) 00:14:58.358 [2024-10-09 03:17:41.506915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.358 [2024-10-09 03:17:41.506933] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544cc0, cid 3, qid 0 00:14:58.358 [2024-10-09 03:17:41.506975] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.358 [2024-10-09 03:17:41.506982] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.358 [2024-10-09 03:17:41.506986] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.358 [2024-10-09 03:17:41.506990] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544cc0) on tqpair=0x14e0750 00:14:58.358 [2024-10-09 03:17:41.507001] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.358 [2024-10-09 03:17:41.507005] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.358 [2024-10-09 03:17:41.507009] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e0750) 00:14:58.358 [2024-10-09 03:17:41.507017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.358 [2024-10-09 03:17:41.507035] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544cc0, cid 3, qid 0 00:14:58.358 [2024-10-09 03:17:41.511073] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.358 [2024-10-09 03:17:41.511090] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.358 [2024-10-09 03:17:41.511095] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.358 [2024-10-09 03:17:41.511100] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544cc0) on tqpair=0x14e0750 00:14:58.358 [2024-10-09 03:17:41.511114] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.358 [2024-10-09 03:17:41.511119] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.358 [2024-10-09 03:17:41.511123] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e0750) 00:14:58.358 [2024-10-09 03:17:41.511132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.358 [2024-10-09 03:17:41.511158] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1544cc0, cid 3, qid 0 00:14:58.358 [2024-10-09 03:17:41.511208] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.358 [2024-10-09 03:17:41.511215] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.358 [2024-10-09 03:17:41.511219] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.358 [2024-10-09 03:17:41.511223] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1544cc0) on tqpair=0x14e0750 00:14:58.358 [2024-10-09 03:17:41.511231] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:14:58.358 00:14:58.358 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 
traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:14:58.358 [2024-10-09 03:17:41.550022] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:14:58.358 [2024-10-09 03:17:41.550086] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74102 ] 00:14:58.622 [2024-10-09 03:17:41.689370] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:14:58.622 [2024-10-09 03:17:41.689460] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:58.622 [2024-10-09 03:17:41.689466] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:58.622 [2024-10-09 03:17:41.689476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:58.622 [2024-10-09 03:17:41.689484] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:58.622 [2024-10-09 03:17:41.689768] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:14:58.622 [2024-10-09 03:17:41.689829] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x17ad750 0 00:14:58.622 [2024-10-09 03:17:41.703093] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:58.622 [2024-10-09 03:17:41.703118] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:58.622 [2024-10-09 03:17:41.703140] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:58.622 [2024-10-09 03:17:41.703144] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:58.622 [2024-10-09 03:17:41.703179] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.622 [2024-10-09 03:17:41.703186] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.622 [2024-10-09 03:17:41.703190] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17ad750) 00:14:58.622 [2024-10-09 03:17:41.703202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:58.622 [2024-10-09 03:17:41.703233] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811840, cid 0, qid 0 00:14:58.622 [2024-10-09 03:17:41.711093] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.622 [2024-10-09 03:17:41.711114] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.622 [2024-10-09 03:17:41.711119] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.622 [2024-10-09 03:17:41.711140] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811840) on tqpair=0x17ad750 00:14:58.622 [2024-10-09 03:17:41.711149] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:58.622 [2024-10-09 03:17:41.711157] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:14:58.622 [2024-10-09 03:17:41.711163] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:14:58.622 [2024-10-09 03:17:41.711177] nvme_tcp.c: 
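(Illustrative aside.) The command line wrapped above is the second identify pass of this test, aimed at the NVM subsystem nqn.2016-06.io.spdk:cnode1 rather than the discovery service. A stand-alone equivalent, with the binary path and transport string copied verbatim from the line above, would be:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -L all    # '-L all' enables every debug log flag, which is why the trace that follows is
                  # interleaved with *DEBUG* lines from nvme_tcp.c and nvme_ctrlr.c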
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.622 [2024-10-09 03:17:41.711183] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.622 [2024-10-09 03:17:41.711187] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17ad750) 00:14:58.622 [2024-10-09 03:17:41.711196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.622 [2024-10-09 03:17:41.711222] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811840, cid 0, qid 0 00:14:58.622 [2024-10-09 03:17:41.711274] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.622 [2024-10-09 03:17:41.711281] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.622 [2024-10-09 03:17:41.711284] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.622 [2024-10-09 03:17:41.711288] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811840) on tqpair=0x17ad750 00:14:58.622 [2024-10-09 03:17:41.711294] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:14:58.622 [2024-10-09 03:17:41.711301] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:14:58.622 [2024-10-09 03:17:41.711309] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.622 [2024-10-09 03:17:41.711313] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.622 [2024-10-09 03:17:41.711317] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17ad750) 00:14:58.622 [2024-10-09 03:17:41.711324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.622 [2024-10-09 03:17:41.711377] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811840, cid 0, qid 0 00:14:58.622 [2024-10-09 03:17:41.711423] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.622 [2024-10-09 03:17:41.711431] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.622 [2024-10-09 03:17:41.711434] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.622 [2024-10-09 03:17:41.711439] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811840) on tqpair=0x17ad750 00:14:58.622 [2024-10-09 03:17:41.711444] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:14:58.622 [2024-10-09 03:17:41.711453] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:14:58.622 [2024-10-09 03:17:41.711461] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.622 [2024-10-09 03:17:41.711465] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.622 [2024-10-09 03:17:41.711469] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17ad750) 00:14:58.622 [2024-10-09 03:17:41.711477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.622 [2024-10-09 03:17:41.711495] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811840, cid 0, qid 0 00:14:58.622 [2024-10-09 03:17:41.711543] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.622 [2024-10-09 03:17:41.711550] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.622 [2024-10-09 03:17:41.711555] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.622 [2024-10-09 03:17:41.711559] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811840) on tqpair=0x17ad750 00:14:58.622 [2024-10-09 03:17:41.711565] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:58.622 [2024-10-09 03:17:41.711576] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.622 [2024-10-09 03:17:41.711581] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.622 [2024-10-09 03:17:41.711585] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17ad750) 00:14:58.622 [2024-10-09 03:17:41.711603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.622 [2024-10-09 03:17:41.711622] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811840, cid 0, qid 0 00:14:58.622 [2024-10-09 03:17:41.711666] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.622 [2024-10-09 03:17:41.711673] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.622 [2024-10-09 03:17:41.711677] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.622 [2024-10-09 03:17:41.711681] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811840) on tqpair=0x17ad750 00:14:58.623 [2024-10-09 03:17:41.711686] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:14:58.623 [2024-10-09 03:17:41.711692] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:14:58.623 [2024-10-09 03:17:41.711700] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:58.623 [2024-10-09 03:17:41.711805] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:14:58.623 [2024-10-09 03:17:41.711810] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:58.623 [2024-10-09 03:17:41.711819] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.623 [2024-10-09 03:17:41.711823] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.623 [2024-10-09 03:17:41.711828] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17ad750) 00:14:58.623 [2024-10-09 03:17:41.711835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.623 [2024-10-09 03:17:41.711854] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811840, cid 0, qid 0 00:14:58.623 [2024-10-09 03:17:41.711905] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.623 [2024-10-09 03:17:41.711912] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.623 [2024-10-09 03:17:41.711916] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.623 [2024-10-09 03:17:41.711920] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811840) on tqpair=0x17ad750 00:14:58.623 [2024-10-09 03:17:41.711925] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:58.623 [2024-10-09 03:17:41.711936] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.623 [2024-10-09 03:17:41.711941] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.623 [2024-10-09 03:17:41.711945] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17ad750) 00:14:58.623 [2024-10-09 03:17:41.711952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.623 [2024-10-09 03:17:41.711970] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811840, cid 0, qid 0 00:14:58.623 [2024-10-09 03:17:41.712013] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.623 [2024-10-09 03:17:41.712020] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.623 [2024-10-09 03:17:41.712024] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.623 [2024-10-09 03:17:41.712028] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811840) on tqpair=0x17ad750 00:14:58.623 [2024-10-09 03:17:41.712033] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:58.623 [2024-10-09 03:17:41.712038] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:14:58.623 [2024-10-09 03:17:41.712047] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:14:58.623 [2024-10-09 03:17:41.712061] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:14:58.623 [2024-10-09 03:17:41.712072] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.623 [2024-10-09 03:17:41.712077] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17ad750) 00:14:58.623 [2024-10-09 03:17:41.712085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.623 [2024-10-09 03:17:41.712116] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811840, cid 0, qid 0 00:14:58.623 [2024-10-09 03:17:41.712217] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:58.623 [2024-10-09 03:17:41.712225] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:58.623 [2024-10-09 03:17:41.712229] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:58.623 [2024-10-09 03:17:41.712233] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17ad750): datao=0, datal=4096, cccid=0 00:14:58.623 [2024-10-09 03:17:41.712238] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1811840) on tqpair(0x17ad750): expected_datao=0, payload_size=4096 00:14:58.623 [2024-10-09 03:17:41.712242] nvme_tcp.c: 800:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:14:58.623 [2024-10-09 03:17:41.712250] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:58.623 [2024-10-09 03:17:41.712255] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:58.623 [2024-10-09 03:17:41.712263] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.623 [2024-10-09 03:17:41.712269] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.623 [2024-10-09 03:17:41.712273] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.623 [2024-10-09 03:17:41.712277] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811840) on tqpair=0x17ad750 00:14:58.623 [2024-10-09 03:17:41.712286] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:14:58.623 [2024-10-09 03:17:41.712292] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:14:58.623 [2024-10-09 03:17:41.712297] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:14:58.623 [2024-10-09 03:17:41.712303] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:14:58.623 [2024-10-09 03:17:41.712308] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:14:58.623 [2024-10-09 03:17:41.712313] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:14:58.623 [2024-10-09 03:17:41.712322] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:14:58.623 [2024-10-09 03:17:41.712335] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.623 [2024-10-09 03:17:41.712340] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.623 [2024-10-09 03:17:41.712344] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17ad750) 00:14:58.623 [2024-10-09 03:17:41.712352] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:58.623 [2024-10-09 03:17:41.712373] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811840, cid 0, qid 0 00:14:58.623 [2024-10-09 03:17:41.712421] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.623 [2024-10-09 03:17:41.712429] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.623 [2024-10-09 03:17:41.712432] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.623 [2024-10-09 03:17:41.712437] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811840) on tqpair=0x17ad750 00:14:58.623 [2024-10-09 03:17:41.712444] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.623 [2024-10-09 03:17:41.712449] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.623 [2024-10-09 03:17:41.712453] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17ad750) 00:14:58.623 [2024-10-09 03:17:41.712459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.623 [2024-10-09 03:17:41.712466] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:14:58.623 [2024-10-09 03:17:41.712470] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.623 [2024-10-09 03:17:41.712474] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x17ad750) 00:14:58.623 [2024-10-09 03:17:41.712480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.623 [2024-10-09 03:17:41.712487] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.623 [2024-10-09 03:17:41.712491] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.623 [2024-10-09 03:17:41.712495] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x17ad750) 00:14:58.623 [2024-10-09 03:17:41.712501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.623 [2024-10-09 03:17:41.712507] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.623 [2024-10-09 03:17:41.712511] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.623 [2024-10-09 03:17:41.712515] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.623 [2024-10-09 03:17:41.712521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.623 [2024-10-09 03:17:41.712526] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:58.623 [2024-10-09 03:17:41.712539] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:58.623 [2024-10-09 03:17:41.712547] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.623 [2024-10-09 03:17:41.712551] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17ad750) 00:14:58.623 [2024-10-09 03:17:41.712558] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.623 [2024-10-09 03:17:41.712579] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811840, cid 0, qid 0 00:14:58.623 [2024-10-09 03:17:41.712586] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18119c0, cid 1, qid 0 00:14:58.623 [2024-10-09 03:17:41.712591] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811b40, cid 2, qid 0 00:14:58.623 [2024-10-09 03:17:41.712596] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.623 [2024-10-09 03:17:41.712601] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811e40, cid 4, qid 0 00:14:58.623 [2024-10-09 03:17:41.712684] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.623 [2024-10-09 03:17:41.712691] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.623 [2024-10-09 03:17:41.712695] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.623 [2024-10-09 03:17:41.712699] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811e40) on tqpair=0x17ad750 00:14:58.623 [2024-10-09 03:17:41.712705] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep 
alive every 5000000 us 00:14:58.623 [2024-10-09 03:17:41.712711] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:58.623 [2024-10-09 03:17:41.712723] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:14:58.623 [2024-10-09 03:17:41.712730] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:58.623 [2024-10-09 03:17:41.712737] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.624 [2024-10-09 03:17:41.712742] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.624 [2024-10-09 03:17:41.712746] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17ad750) 00:14:58.624 [2024-10-09 03:17:41.712753] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:58.624 [2024-10-09 03:17:41.712772] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811e40, cid 4, qid 0 00:14:58.624 [2024-10-09 03:17:41.712823] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.624 [2024-10-09 03:17:41.712830] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.624 [2024-10-09 03:17:41.712834] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.624 [2024-10-09 03:17:41.712838] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811e40) on tqpair=0x17ad750 00:14:58.624 [2024-10-09 03:17:41.712904] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:14:58.624 [2024-10-09 03:17:41.712916] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:58.624 [2024-10-09 03:17:41.712925] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.624 [2024-10-09 03:17:41.712929] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17ad750) 00:14:58.624 [2024-10-09 03:17:41.712937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.624 [2024-10-09 03:17:41.712957] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811e40, cid 4, qid 0 00:14:58.624 [2024-10-09 03:17:41.713015] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:58.624 [2024-10-09 03:17:41.713023] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:58.624 [2024-10-09 03:17:41.713027] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:58.624 [2024-10-09 03:17:41.713030] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17ad750): datao=0, datal=4096, cccid=4 00:14:58.624 [2024-10-09 03:17:41.713035] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1811e40) on tqpair(0x17ad750): expected_datao=0, payload_size=4096 00:14:58.624 [2024-10-09 03:17:41.713040] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.624 [2024-10-09 03:17:41.713060] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:58.624 [2024-10-09 03:17:41.713065] 
nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:58.624 [2024-10-09 03:17:41.713074] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.624 [2024-10-09 03:17:41.713080] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.624 [2024-10-09 03:17:41.713084] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.624 [2024-10-09 03:17:41.713088] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811e40) on tqpair=0x17ad750 00:14:58.624 [2024-10-09 03:17:41.713107] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:14:58.624 [2024-10-09 03:17:41.713118] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:14:58.624 [2024-10-09 03:17:41.713129] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:14:58.624 [2024-10-09 03:17:41.713137] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.624 [2024-10-09 03:17:41.713142] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17ad750) 00:14:58.624 [2024-10-09 03:17:41.713150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.624 [2024-10-09 03:17:41.713171] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811e40, cid 4, qid 0 00:14:58.624 [2024-10-09 03:17:41.713247] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:58.624 [2024-10-09 03:17:41.713254] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:58.624 [2024-10-09 03:17:41.713258] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:58.624 [2024-10-09 03:17:41.713262] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17ad750): datao=0, datal=4096, cccid=4 00:14:58.624 [2024-10-09 03:17:41.713267] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1811e40) on tqpair(0x17ad750): expected_datao=0, payload_size=4096 00:14:58.624 [2024-10-09 03:17:41.713271] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.624 [2024-10-09 03:17:41.713278] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:58.624 [2024-10-09 03:17:41.713282] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:58.624 [2024-10-09 03:17:41.713291] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.624 [2024-10-09 03:17:41.713297] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.624 [2024-10-09 03:17:41.713301] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.624 [2024-10-09 03:17:41.713305] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811e40) on tqpair=0x17ad750 00:14:58.624 [2024-10-09 03:17:41.713316] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:58.624 [2024-10-09 03:17:41.713327] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:58.624 [2024-10-09 03:17:41.713335] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.624 [2024-10-09 
03:17:41.713340] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17ad750) 00:14:58.624 [2024-10-09 03:17:41.713347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.624 [2024-10-09 03:17:41.713367] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811e40, cid 4, qid 0 00:14:58.624 [2024-10-09 03:17:41.713427] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:58.624 [2024-10-09 03:17:41.713434] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:58.624 [2024-10-09 03:17:41.713438] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:58.624 [2024-10-09 03:17:41.713442] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17ad750): datao=0, datal=4096, cccid=4 00:14:58.624 [2024-10-09 03:17:41.713447] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1811e40) on tqpair(0x17ad750): expected_datao=0, payload_size=4096 00:14:58.624 [2024-10-09 03:17:41.713451] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.624 [2024-10-09 03:17:41.713459] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:58.624 [2024-10-09 03:17:41.713463] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:58.624 [2024-10-09 03:17:41.713471] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.624 [2024-10-09 03:17:41.713477] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.624 [2024-10-09 03:17:41.713481] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.624 [2024-10-09 03:17:41.713485] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811e40) on tqpair=0x17ad750 00:14:58.624 [2024-10-09 03:17:41.713499] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:58.624 [2024-10-09 03:17:41.713509] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:14:58.624 [2024-10-09 03:17:41.713518] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:14:58.624 [2024-10-09 03:17:41.713525] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:58.624 [2024-10-09 03:17:41.713530] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:58.624 [2024-10-09 03:17:41.713536] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:14:58.624 [2024-10-09 03:17:41.713542] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:14:58.624 [2024-10-09 03:17:41.713547] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:14:58.624 [2024-10-09 03:17:41.713552] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:14:58.624 [2024-10-09 03:17:41.713567] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.624 [2024-10-09 03:17:41.713572] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17ad750) 00:14:58.624 [2024-10-09 03:17:41.713580] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.624 [2024-10-09 03:17:41.713587] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.624 [2024-10-09 03:17:41.713592] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.624 [2024-10-09 03:17:41.713595] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17ad750) 00:14:58.624 [2024-10-09 03:17:41.713602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.624 [2024-10-09 03:17:41.713630] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811e40, cid 4, qid 0 00:14:58.624 [2024-10-09 03:17:41.713637] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811fc0, cid 5, qid 0 00:14:58.624 [2024-10-09 03:17:41.713697] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.624 [2024-10-09 03:17:41.713704] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.624 [2024-10-09 03:17:41.713708] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.624 [2024-10-09 03:17:41.713712] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811e40) on tqpair=0x17ad750 00:14:58.624 [2024-10-09 03:17:41.713719] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.624 [2024-10-09 03:17:41.713725] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.624 [2024-10-09 03:17:41.713729] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.624 [2024-10-09 03:17:41.713733] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811fc0) on tqpair=0x17ad750 00:14:58.624 [2024-10-09 03:17:41.713744] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.624 [2024-10-09 03:17:41.713749] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17ad750) 00:14:58.624 [2024-10-09 03:17:41.713756] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.624 [2024-10-09 03:17:41.713775] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811fc0, cid 5, qid 0 00:14:58.624 [2024-10-09 03:17:41.713821] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.624 [2024-10-09 03:17:41.713834] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.624 [2024-10-09 03:17:41.713838] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.624 [2024-10-09 03:17:41.713843] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811fc0) on tqpair=0x17ad750 00:14:58.624 [2024-10-09 03:17:41.713854] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.624 [2024-10-09 03:17:41.713858] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17ad750) 00:14:58.624 [2024-10-09 03:17:41.713866] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:14:58.624 [2024-10-09 03:17:41.713884] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811fc0, cid 5, qid 0 00:14:58.625 [2024-10-09 03:17:41.713934] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.625 [2024-10-09 03:17:41.713941] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.625 [2024-10-09 03:17:41.713945] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.625 [2024-10-09 03:17:41.713949] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811fc0) on tqpair=0x17ad750 00:14:58.625 [2024-10-09 03:17:41.713959] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.625 [2024-10-09 03:17:41.713965] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17ad750) 00:14:58.625 [2024-10-09 03:17:41.713972] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.625 [2024-10-09 03:17:41.714001] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811fc0, cid 5, qid 0 00:14:58.625 [2024-10-09 03:17:41.714074] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.625 [2024-10-09 03:17:41.714083] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.625 [2024-10-09 03:17:41.714087] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.625 [2024-10-09 03:17:41.714091] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811fc0) on tqpair=0x17ad750 00:14:58.625 [2024-10-09 03:17:41.714110] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.625 [2024-10-09 03:17:41.714116] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17ad750) 00:14:58.625 [2024-10-09 03:17:41.714124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.625 [2024-10-09 03:17:41.714132] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.625 [2024-10-09 03:17:41.714136] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17ad750) 00:14:58.625 [2024-10-09 03:17:41.714143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.625 [2024-10-09 03:17:41.714150] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.625 [2024-10-09 03:17:41.714155] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x17ad750) 00:14:58.625 [2024-10-09 03:17:41.714161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.625 [2024-10-09 03:17:41.714169] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.625 [2024-10-09 03:17:41.714173] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x17ad750) 00:14:58.625 [2024-10-09 03:17:41.714180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.625 [2024-10-09 03:17:41.714200] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1811fc0, cid 5, qid 0 00:14:58.625 [2024-10-09 03:17:41.714207] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811e40, cid 4, qid 0 00:14:58.625 [2024-10-09 03:17:41.714212] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1812140, cid 6, qid 0 00:14:58.625 [2024-10-09 03:17:41.714217] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18122c0, cid 7, qid 0 00:14:58.625 [2024-10-09 03:17:41.714364] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:58.625 [2024-10-09 03:17:41.714372] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:58.625 [2024-10-09 03:17:41.714376] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:58.625 [2024-10-09 03:17:41.714380] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17ad750): datao=0, datal=8192, cccid=5 00:14:58.625 [2024-10-09 03:17:41.714384] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1811fc0) on tqpair(0x17ad750): expected_datao=0, payload_size=8192 00:14:58.625 [2024-10-09 03:17:41.714389] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.625 [2024-10-09 03:17:41.714407] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:58.625 [2024-10-09 03:17:41.714412] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:58.625 [2024-10-09 03:17:41.714418] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:58.625 [2024-10-09 03:17:41.714424] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:58.625 [2024-10-09 03:17:41.714428] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:58.625 [2024-10-09 03:17:41.714432] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17ad750): datao=0, datal=512, cccid=4 00:14:58.625 [2024-10-09 03:17:41.714436] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1811e40) on tqpair(0x17ad750): expected_datao=0, payload_size=512 00:14:58.625 [2024-10-09 03:17:41.714441] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.625 [2024-10-09 03:17:41.714448] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:58.625 [2024-10-09 03:17:41.714452] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:58.625 [2024-10-09 03:17:41.714457] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:58.625 [2024-10-09 03:17:41.714463] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:58.625 [2024-10-09 03:17:41.714467] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:58.625 [2024-10-09 03:17:41.714471] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17ad750): datao=0, datal=512, cccid=6 00:14:58.625 [2024-10-09 03:17:41.714476] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1812140) on tqpair(0x17ad750): expected_datao=0, payload_size=512 00:14:58.625 [2024-10-09 03:17:41.714480] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.625 [2024-10-09 03:17:41.714487] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:58.625 [2024-10-09 03:17:41.714490] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:58.625 [2024-10-09 03:17:41.714496] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:58.625 [2024-10-09 03:17:41.714502] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:14:58.625 [2024-10-09 03:17:41.714506] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:58.625 [2024-10-09 03:17:41.714510] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17ad750): datao=0, datal=4096, cccid=7 00:14:58.625 [2024-10-09 03:17:41.714514] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18122c0) on tqpair(0x17ad750): expected_datao=0, payload_size=4096 00:14:58.625 [2024-10-09 03:17:41.714519] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.625 [2024-10-09 03:17:41.714526] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:58.625 [2024-10-09 03:17:41.714530] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:58.625 [2024-10-09 03:17:41.714538] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.625 [2024-10-09 03:17:41.714544] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.625 [2024-10-09 03:17:41.714548] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.625 [2024-10-09 03:17:41.714552] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811fc0) on tqpair=0x17ad750 00:14:58.625 [2024-10-09 03:17:41.714567] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.625 [2024-10-09 03:17:41.714574] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.625 [2024-10-09 03:17:41.714578] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.625 [2024-10-09 03:17:41.714582] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811e40) on tqpair=0x17ad750 00:14:58.625 [2024-10-09 03:17:41.714594] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.625 [2024-10-09 03:17:41.714601] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.625 [2024-10-09 03:17:41.714605] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.625 [2024-10-09 03:17:41.714609] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1812140) on tqpair=0x17ad750 00:14:58.625 ===================================================== 00:14:58.625 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:58.625 ===================================================== 00:14:58.625 Controller Capabilities/Features 00:14:58.625 ================================ 00:14:58.625 Vendor ID: 8086 00:14:58.625 Subsystem Vendor ID: 8086 00:14:58.625 Serial Number: SPDK00000000000001 00:14:58.625 Model Number: SPDK bdev Controller 00:14:58.625 Firmware Version: 25.01 00:14:58.625 Recommended Arb Burst: 6 00:14:58.625 IEEE OUI Identifier: e4 d2 5c 00:14:58.625 Multi-path I/O 00:14:58.625 May have multiple subsystem ports: Yes 00:14:58.625 May have multiple controllers: Yes 00:14:58.625 Associated with SR-IOV VF: No 00:14:58.625 Max Data Transfer Size: 131072 00:14:58.625 Max Number of Namespaces: 32 00:14:58.625 Max Number of I/O Queues: 127 00:14:58.625 NVMe Specification Version (VS): 1.3 00:14:58.625 NVMe Specification Version (Identify): 1.3 00:14:58.625 Maximum Queue Entries: 128 00:14:58.625 Contiguous Queues Required: Yes 00:14:58.625 Arbitration Mechanisms Supported 00:14:58.625 Weighted Round Robin: Not Supported 00:14:58.625 Vendor Specific: Not Supported 00:14:58.625 Reset Timeout: 15000 ms 00:14:58.625 Doorbell Stride: 4 bytes 00:14:58.625 NVM Subsystem Reset: Not Supported 00:14:58.625 Command Sets Supported 00:14:58.625 NVM Command Set: Supported 
00:14:58.625 Boot Partition: Not Supported 00:14:58.625 Memory Page Size Minimum: 4096 bytes 00:14:58.625 Memory Page Size Maximum: 4096 bytes 00:14:58.625 Persistent Memory Region: Not Supported 00:14:58.625 Optional Asynchronous Events Supported 00:14:58.625 Namespace Attribute Notices: Supported 00:14:58.625 Firmware Activation Notices: Not Supported 00:14:58.625 ANA Change Notices: Not Supported 00:14:58.625 PLE Aggregate Log Change Notices: Not Supported 00:14:58.625 LBA Status Info Alert Notices: Not Supported 00:14:58.625 EGE Aggregate Log Change Notices: Not Supported 00:14:58.625 Normal NVM Subsystem Shutdown event: Not Supported 00:14:58.625 Zone Descriptor Change Notices: Not Supported 00:14:58.625 Discovery Log Change Notices: Not Supported 00:14:58.625 Controller Attributes 00:14:58.625 128-bit Host Identifier: Supported 00:14:58.625 Non-Operational Permissive Mode: Not Supported 00:14:58.625 NVM Sets: Not Supported 00:14:58.625 Read Recovery Levels: Not Supported 00:14:58.625 Endurance Groups: Not Supported 00:14:58.625 Predictable Latency Mode: Not Supported 00:14:58.625 Traffic Based Keep ALive: Not Supported 00:14:58.625 Namespace Granularity: Not Supported 00:14:58.625 SQ Associations: Not Supported 00:14:58.625 UUID List: Not Supported 00:14:58.625 Multi-Domain Subsystem: Not Supported 00:14:58.625 Fixed Capacity Management: Not Supported 00:14:58.625 Variable Capacity Management: Not Supported 00:14:58.625 Delete Endurance Group: Not Supported 00:14:58.625 Delete NVM Set: Not Supported 00:14:58.625 Extended LBA Formats Supported: Not Supported 00:14:58.625 Flexible Data Placement Supported: Not Supported 00:14:58.625 00:14:58.625 Controller Memory Buffer Support 00:14:58.625 ================================ 00:14:58.625 Supported: No 00:14:58.625 00:14:58.625 Persistent Memory Region Support 00:14:58.625 ================================ 00:14:58.626 Supported: No 00:14:58.626 00:14:58.626 Admin Command Set Attributes 00:14:58.626 ============================ 00:14:58.626 Security Send/Receive: Not Supported 00:14:58.626 Format NVM: Not Supported 00:14:58.626 Firmware Activate/Download: Not Supported 00:14:58.626 Namespace Management: Not Supported 00:14:58.626 Device Self-Test: Not Supported 00:14:58.626 Directives: Not Supported 00:14:58.626 NVMe-MI: Not Supported 00:14:58.626 Virtualization Management: Not Supported 00:14:58.626 Doorbell Buffer Config: Not Supported 00:14:58.626 Get LBA Status Capability: Not Supported 00:14:58.626 Command & Feature Lockdown Capability: Not Supported 00:14:58.626 Abort Command Limit: 4 00:14:58.626 Async Event Request Limit: 4 00:14:58.626 Number of Firmware Slots: N/A 00:14:58.626 Firmware Slot 1 Read-Only: N/A 00:14:58.626 Firmware Activation Without Reset: N/A 00:14:58.626 Multiple Update Detection Support: N/A 00:14:58.626 Firmware Update Granularity: No Information Provided 00:14:58.626 Per-Namespace SMART Log: No 00:14:58.626 Asymmetric Namespace Access Log Page: Not Supported 00:14:58.626 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:14:58.626 Command Effects Log Page: Supported 00:14:58.626 Get Log Page Extended Data: Supported 00:14:58.626 Telemetry Log Pages: Not Supported 00:14:58.626 Persistent Event Log Pages: Not Supported 00:14:58.626 Supported Log Pages Log Page: May Support 00:14:58.626 Commands Supported & Effects Log Page: Not Supported 00:14:58.626 Feature Identifiers & Effects Log Page:May Support 00:14:58.626 NVMe-MI Commands & Effects Log Page: May Support 00:14:58.626 Data Area 4 for Telemetry Log: Not Supported 
00:14:58.626 Error Log Page Entries Supported: 128 00:14:58.626 Keep Alive: Supported 00:14:58.626 Keep Alive Granularity: 10000 ms 00:14:58.626 00:14:58.626 NVM Command Set Attributes 00:14:58.626 ========================== 00:14:58.626 Submission Queue Entry Size 00:14:58.626 Max: 64 00:14:58.626 Min: 64 00:14:58.626 Completion Queue Entry Size 00:14:58.626 Max: 16 00:14:58.626 Min: 16 00:14:58.626 Number of Namespaces: 32 00:14:58.626 Compare Command: Supported 00:14:58.626 Write Uncorrectable Command: Not Supported 00:14:58.626 Dataset Management Command: Supported 00:14:58.626 Write Zeroes Command: Supported 00:14:58.626 Set Features Save Field: Not Supported 00:14:58.626 Reservations: Supported 00:14:58.626 Timestamp: Not Supported 00:14:58.626 Copy: Supported 00:14:58.626 Volatile Write Cache: Present 00:14:58.626 Atomic Write Unit (Normal): 1 00:14:58.626 Atomic Write Unit (PFail): 1 00:14:58.626 Atomic Compare & Write Unit: 1 00:14:58.626 Fused Compare & Write: Supported 00:14:58.626 Scatter-Gather List 00:14:58.626 SGL Command Set: Supported 00:14:58.626 SGL Keyed: Supported 00:14:58.626 SGL Bit Bucket Descriptor: Not Supported 00:14:58.626 SGL Metadata Pointer: Not Supported 00:14:58.626 Oversized SGL: Not Supported 00:14:58.626 SGL Metadata Address: Not Supported 00:14:58.626 SGL Offset: Supported 00:14:58.626 Transport SGL Data Block: Not Supported 00:14:58.626 Replay Protected Memory Block: Not Supported 00:14:58.626 00:14:58.626 Firmware Slot Information 00:14:58.626 ========================= 00:14:58.626 Active slot: 1 00:14:58.626 Slot 1 Firmware Revision: 25.01 00:14:58.626 00:14:58.626 00:14:58.626 Commands Supported and Effects 00:14:58.626 ============================== 00:14:58.626 Admin Commands 00:14:58.626 -------------- 00:14:58.626 Get Log Page (02h): Supported 00:14:58.626 Identify (06h): Supported 00:14:58.626 Abort (08h): Supported 00:14:58.626 Set Features (09h): Supported 00:14:58.626 Get Features (0Ah): Supported 00:14:58.626 Asynchronous Event Request (0Ch): Supported 00:14:58.626 Keep Alive (18h): Supported 00:14:58.626 I/O Commands 00:14:58.626 ------------ 00:14:58.626 Flush (00h): Supported LBA-Change 00:14:58.626 Write (01h): Supported LBA-Change 00:14:58.626 Read (02h): Supported 00:14:58.626 Compare (05h): Supported 00:14:58.626 Write Zeroes (08h): Supported LBA-Change 00:14:58.626 Dataset Management (09h): Supported LBA-Change 00:14:58.626 Copy (19h): Supported LBA-Change 00:14:58.626 00:14:58.626 Error Log 00:14:58.626 ========= 00:14:58.626 00:14:58.626 Arbitration 00:14:58.626 =========== 00:14:58.626 Arbitration Burst: 1 00:14:58.626 00:14:58.626 Power Management 00:14:58.626 ================ 00:14:58.626 Number of Power States: 1 00:14:58.626 Current Power State: Power State #0 00:14:58.626 Power State #0: 00:14:58.626 Max Power: 0.00 W 00:14:58.626 Non-Operational State: Operational 00:14:58.626 Entry Latency: Not Reported 00:14:58.626 Exit Latency: Not Reported 00:14:58.626 Relative Read Throughput: 0 00:14:58.626 Relative Read Latency: 0 00:14:58.626 Relative Write Throughput: 0 00:14:58.626 Relative Write Latency: 0 00:14:58.626 Idle Power: Not Reported 00:14:58.626 Active Power: Not Reported 00:14:58.626 Non-Operational Permissive Mode: Not Supported 00:14:58.626 00:14:58.626 Health Information 00:14:58.626 ================== 00:14:58.626 Critical Warnings: 00:14:58.626 Available Spare Space: OK 00:14:58.626 Temperature: OK 00:14:58.626 Device Reliability: OK 00:14:58.626 Read Only: No 00:14:58.626 Volatile Memory Backup: OK 00:14:58.626 
Current Temperature: 0 Kelvin (-273 Celsius) 00:14:58.626 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:58.626 Available Spare: 0% 00:14:58.626 Available Spare Threshold: 0% 00:14:58.626 Life Percentage Used:[2024-10-09 03:17:41.714616] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.626 [2024-10-09 03:17:41.714623] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.626 [2024-10-09 03:17:41.714626] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.626 [2024-10-09 03:17:41.714630] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18122c0) on tqpair=0x17ad750 00:14:58.626 [2024-10-09 03:17:41.714733] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.626 [2024-10-09 03:17:41.714741] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x17ad750) 00:14:58.626 [2024-10-09 03:17:41.714749] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.626 [2024-10-09 03:17:41.714772] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18122c0, cid 7, qid 0 00:14:58.626 [2024-10-09 03:17:41.714821] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.626 [2024-10-09 03:17:41.714829] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.626 [2024-10-09 03:17:41.714832] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.626 [2024-10-09 03:17:41.714837] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18122c0) on tqpair=0x17ad750 00:14:58.626 [2024-10-09 03:17:41.714875] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:14:58.626 [2024-10-09 03:17:41.714886] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811840) on tqpair=0x17ad750 00:14:58.626 [2024-10-09 03:17:41.714893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.626 [2024-10-09 03:17:41.714899] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18119c0) on tqpair=0x17ad750 00:14:58.626 [2024-10-09 03:17:41.714904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.626 [2024-10-09 03:17:41.714909] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811b40) on tqpair=0x17ad750 00:14:58.626 [2024-10-09 03:17:41.714914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.626 [2024-10-09 03:17:41.714920] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.626 [2024-10-09 03:17:41.714925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.626 [2024-10-09 03:17:41.714934] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.626 [2024-10-09 03:17:41.714939] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.626 [2024-10-09 03:17:41.714943] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.626 [2024-10-09 03:17:41.714950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 
cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.626 [2024-10-09 03:17:41.714972] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.626 [2024-10-09 03:17:41.715020] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.627 [2024-10-09 03:17:41.715027] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.627 [2024-10-09 03:17:41.715031] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.715036] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.627 [2024-10-09 03:17:41.715044] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.719110] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.719117] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.627 [2024-10-09 03:17:41.719126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.627 [2024-10-09 03:17:41.719157] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.627 [2024-10-09 03:17:41.719230] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.627 [2024-10-09 03:17:41.719237] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.627 [2024-10-09 03:17:41.719241] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.719245] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.627 [2024-10-09 03:17:41.719251] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:14:58.627 [2024-10-09 03:17:41.719262] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:14:58.627 [2024-10-09 03:17:41.719273] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.719278] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.719282] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.627 [2024-10-09 03:17:41.719289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.627 [2024-10-09 03:17:41.719324] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.627 [2024-10-09 03:17:41.719371] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.627 [2024-10-09 03:17:41.719378] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.627 [2024-10-09 03:17:41.719382] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.719386] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.627 [2024-10-09 03:17:41.719397] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.719402] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.719406] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.627 [2024-10-09 
03:17:41.719414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.627 [2024-10-09 03:17:41.719431] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.627 [2024-10-09 03:17:41.719473] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.627 [2024-10-09 03:17:41.719480] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.627 [2024-10-09 03:17:41.719484] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.719488] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.627 [2024-10-09 03:17:41.719498] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.719503] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.719507] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.627 [2024-10-09 03:17:41.719515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.627 [2024-10-09 03:17:41.719532] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.627 [2024-10-09 03:17:41.719579] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.627 [2024-10-09 03:17:41.719586] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.627 [2024-10-09 03:17:41.719590] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.719594] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.627 [2024-10-09 03:17:41.719604] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.719609] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.719613] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.627 [2024-10-09 03:17:41.719621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.627 [2024-10-09 03:17:41.719638] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.627 [2024-10-09 03:17:41.719681] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.627 [2024-10-09 03:17:41.719689] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.627 [2024-10-09 03:17:41.719692] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.719697] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.627 [2024-10-09 03:17:41.719707] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.719712] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.719716] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.627 [2024-10-09 03:17:41.719723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.627 [2024-10-09 03:17:41.719751] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.627 [2024-10-09 03:17:41.719801] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.627 [2024-10-09 03:17:41.719808] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.627 [2024-10-09 03:17:41.719812] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.719816] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.627 [2024-10-09 03:17:41.719826] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.719831] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.719835] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.627 [2024-10-09 03:17:41.719843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.627 [2024-10-09 03:17:41.719860] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.627 [2024-10-09 03:17:41.719907] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.627 [2024-10-09 03:17:41.719914] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.627 [2024-10-09 03:17:41.719918] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.719922] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.627 [2024-10-09 03:17:41.719932] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.719937] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.719941] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.627 [2024-10-09 03:17:41.719949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.627 [2024-10-09 03:17:41.719966] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.627 [2024-10-09 03:17:41.720016] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.627 [2024-10-09 03:17:41.720023] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.627 [2024-10-09 03:17:41.720027] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.720031] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.627 [2024-10-09 03:17:41.720041] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.720046] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.720050] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.627 [2024-10-09 03:17:41.720057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.627 [2024-10-09 03:17:41.720089] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.627 [2024-10-09 03:17:41.720136] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.627 [2024-10-09 
03:17:41.720144] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.627 [2024-10-09 03:17:41.720148] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.720152] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.627 [2024-10-09 03:17:41.720162] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.720167] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.720171] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.627 [2024-10-09 03:17:41.720179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.627 [2024-10-09 03:17:41.720196] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.627 [2024-10-09 03:17:41.720241] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.627 [2024-10-09 03:17:41.720250] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.627 [2024-10-09 03:17:41.720254] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.720258] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.627 [2024-10-09 03:17:41.720269] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.720274] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.720278] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.627 [2024-10-09 03:17:41.720285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.627 [2024-10-09 03:17:41.720302] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.627 [2024-10-09 03:17:41.720350] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.627 [2024-10-09 03:17:41.720362] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.627 [2024-10-09 03:17:41.720366] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.720370] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.627 [2024-10-09 03:17:41.720381] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.720386] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.627 [2024-10-09 03:17:41.720390] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.627 [2024-10-09 03:17:41.720397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.627 [2024-10-09 03:17:41.720414] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.627 [2024-10-09 03:17:41.720465] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.627 [2024-10-09 03:17:41.720472] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.627 [2024-10-09 03:17:41.720475] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.628 
[2024-10-09 03:17:41.720480] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.628 [2024-10-09 03:17:41.720490] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.720495] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.720499] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.628 [2024-10-09 03:17:41.720506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.628 [2024-10-09 03:17:41.720523] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.628 [2024-10-09 03:17:41.720570] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.628 [2024-10-09 03:17:41.720577] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.628 [2024-10-09 03:17:41.720581] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.720585] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.628 [2024-10-09 03:17:41.720595] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.720600] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.720604] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.628 [2024-10-09 03:17:41.720611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.628 [2024-10-09 03:17:41.720628] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.628 [2024-10-09 03:17:41.720675] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.628 [2024-10-09 03:17:41.720682] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.628 [2024-10-09 03:17:41.720686] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.720690] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.628 [2024-10-09 03:17:41.720700] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.720705] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.720709] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.628 [2024-10-09 03:17:41.720716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.628 [2024-10-09 03:17:41.720733] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.628 [2024-10-09 03:17:41.720777] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.628 [2024-10-09 03:17:41.720784] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.628 [2024-10-09 03:17:41.720788] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.720792] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.628 [2024-10-09 03:17:41.720803] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.720808] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.720812] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.628 [2024-10-09 03:17:41.720819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.628 [2024-10-09 03:17:41.720836] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.628 [2024-10-09 03:17:41.720880] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.628 [2024-10-09 03:17:41.720887] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.628 [2024-10-09 03:17:41.720891] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.720895] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.628 [2024-10-09 03:17:41.720905] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.720911] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.720915] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.628 [2024-10-09 03:17:41.720922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.628 [2024-10-09 03:17:41.720938] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.628 [2024-10-09 03:17:41.720982] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.628 [2024-10-09 03:17:41.720994] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.628 [2024-10-09 03:17:41.720998] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.721003] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.628 [2024-10-09 03:17:41.721013] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.721018] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.721023] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.628 [2024-10-09 03:17:41.721030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.628 [2024-10-09 03:17:41.721057] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.628 [2024-10-09 03:17:41.721110] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.628 [2024-10-09 03:17:41.721118] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.628 [2024-10-09 03:17:41.721121] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.721126] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.628 [2024-10-09 03:17:41.721136] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.721141] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.721145] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.628 [2024-10-09 03:17:41.721152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.628 [2024-10-09 03:17:41.721171] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.628 [2024-10-09 03:17:41.721219] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.628 [2024-10-09 03:17:41.721226] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.628 [2024-10-09 03:17:41.721230] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.721234] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.628 [2024-10-09 03:17:41.721244] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.721249] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.721253] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.628 [2024-10-09 03:17:41.721261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.628 [2024-10-09 03:17:41.721278] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.628 [2024-10-09 03:17:41.721325] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.628 [2024-10-09 03:17:41.721332] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.628 [2024-10-09 03:17:41.721335] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.721340] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.628 [2024-10-09 03:17:41.721350] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.721355] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.721359] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.628 [2024-10-09 03:17:41.721366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.628 [2024-10-09 03:17:41.721383] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.628 [2024-10-09 03:17:41.721431] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.628 [2024-10-09 03:17:41.721438] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.628 [2024-10-09 03:17:41.721442] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.721446] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.628 [2024-10-09 03:17:41.721456] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.721461] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.721465] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.628 [2024-10-09 03:17:41.721472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.628 [2024-10-09 03:17:41.721489] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.628 [2024-10-09 03:17:41.721537] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.628 [2024-10-09 03:17:41.721544] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.628 [2024-10-09 03:17:41.721547] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.721552] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.628 [2024-10-09 03:17:41.721562] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.721567] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.721571] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.628 [2024-10-09 03:17:41.721578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.628 [2024-10-09 03:17:41.721595] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.628 [2024-10-09 03:17:41.721639] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.628 [2024-10-09 03:17:41.721646] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.628 [2024-10-09 03:17:41.721650] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.721654] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.628 [2024-10-09 03:17:41.721664] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.721669] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.721673] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.628 [2024-10-09 03:17:41.721680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.628 [2024-10-09 03:17:41.721697] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.628 [2024-10-09 03:17:41.721745] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.628 [2024-10-09 03:17:41.721752] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.628 [2024-10-09 03:17:41.721755] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.721760] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.628 [2024-10-09 03:17:41.721770] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.721775] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.628 [2024-10-09 03:17:41.721779] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.628 [2024-10-09 03:17:41.721786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.629 [2024-10-09 03:17:41.721803] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.629 [2024-10-09 
03:17:41.721847] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.629 [2024-10-09 03:17:41.721854] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.629 [2024-10-09 03:17:41.721858] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.721862] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.629 [2024-10-09 03:17:41.721872] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.721877] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.721881] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.629 [2024-10-09 03:17:41.721888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.629 [2024-10-09 03:17:41.721905] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.629 [2024-10-09 03:17:41.721955] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.629 [2024-10-09 03:17:41.721962] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.629 [2024-10-09 03:17:41.721966] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.721970] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.629 [2024-10-09 03:17:41.721991] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.721997] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.722001] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.629 [2024-10-09 03:17:41.722008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.629 [2024-10-09 03:17:41.722027] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.629 [2024-10-09 03:17:41.722097] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.629 [2024-10-09 03:17:41.722105] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.629 [2024-10-09 03:17:41.722109] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.722113] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.629 [2024-10-09 03:17:41.722124] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.722129] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.722133] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.629 [2024-10-09 03:17:41.722141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.629 [2024-10-09 03:17:41.722159] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.629 [2024-10-09 03:17:41.722207] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.629 [2024-10-09 03:17:41.722214] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.629 
[2024-10-09 03:17:41.722218] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.722222] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.629 [2024-10-09 03:17:41.722232] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.722237] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.722241] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.629 [2024-10-09 03:17:41.722248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.629 [2024-10-09 03:17:41.722266] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.629 [2024-10-09 03:17:41.722316] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.629 [2024-10-09 03:17:41.722323] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.629 [2024-10-09 03:17:41.722327] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.722331] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.629 [2024-10-09 03:17:41.722341] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.722346] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.722350] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.629 [2024-10-09 03:17:41.722358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.629 [2024-10-09 03:17:41.722374] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.629 [2024-10-09 03:17:41.722427] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.629 [2024-10-09 03:17:41.722434] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.629 [2024-10-09 03:17:41.722438] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.722442] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.629 [2024-10-09 03:17:41.722453] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.722458] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.722462] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.629 [2024-10-09 03:17:41.722469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.629 [2024-10-09 03:17:41.722487] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.629 [2024-10-09 03:17:41.722534] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.629 [2024-10-09 03:17:41.722541] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.629 [2024-10-09 03:17:41.722545] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.722549] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.629 [2024-10-09 03:17:41.722559] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.722564] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.722568] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.629 [2024-10-09 03:17:41.722576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.629 [2024-10-09 03:17:41.722593] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.629 [2024-10-09 03:17:41.722640] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.629 [2024-10-09 03:17:41.722647] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.629 [2024-10-09 03:17:41.722651] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.722655] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.629 [2024-10-09 03:17:41.722665] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.722670] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.722674] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.629 [2024-10-09 03:17:41.722681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.629 [2024-10-09 03:17:41.722699] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.629 [2024-10-09 03:17:41.722749] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.629 [2024-10-09 03:17:41.722755] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.629 [2024-10-09 03:17:41.722759] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.722764] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.629 [2024-10-09 03:17:41.722774] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.722779] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.722783] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.629 [2024-10-09 03:17:41.722790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.629 [2024-10-09 03:17:41.722807] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.629 [2024-10-09 03:17:41.722852] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.629 [2024-10-09 03:17:41.722859] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.629 [2024-10-09 03:17:41.722863] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.722867] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.629 [2024-10-09 03:17:41.722877] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.722882] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.722886] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.629 [2024-10-09 03:17:41.722893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.629 [2024-10-09 03:17:41.722910] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.629 [2024-10-09 03:17:41.722952] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.629 [2024-10-09 03:17:41.722959] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.629 [2024-10-09 03:17:41.722963] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.722967] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.629 [2024-10-09 03:17:41.722978] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.722983] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.722987] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.629 [2024-10-09 03:17:41.722994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.629 [2024-10-09 03:17:41.723011] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.629 [2024-10-09 03:17:41.727084] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.629 [2024-10-09 03:17:41.727105] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.629 [2024-10-09 03:17:41.727126] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.727131] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.629 [2024-10-09 03:17:41.727146] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.727152] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.727156] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17ad750) 00:14:58.629 [2024-10-09 03:17:41.727164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:58.629 [2024-10-09 03:17:41.727189] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1811cc0, cid 3, qid 0 00:14:58.629 [2024-10-09 03:17:41.727244] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:58.629 [2024-10-09 03:17:41.727251] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:58.629 [2024-10-09 03:17:41.727255] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:58.629 [2024-10-09 03:17:41.727259] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1811cc0) on tqpair=0x17ad750 00:14:58.629 [2024-10-09 03:17:41.727267] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 8 milliseconds 00:14:58.629 0% 00:14:58.630 Data Units Read: 0 00:14:58.630 Data Units Written: 0 00:14:58.630 Host Read Commands: 0 00:14:58.630 Host Write Commands: 0 00:14:58.630 Controller Busy Time: 0 minutes 
00:14:58.630 Power Cycles: 0 00:14:58.630 Power On Hours: 0 hours 00:14:58.630 Unsafe Shutdowns: 0 00:14:58.630 Unrecoverable Media Errors: 0 00:14:58.630 Lifetime Error Log Entries: 0 00:14:58.630 Warning Temperature Time: 0 minutes 00:14:58.630 Critical Temperature Time: 0 minutes 00:14:58.630 00:14:58.630 Number of Queues 00:14:58.630 ================ 00:14:58.630 Number of I/O Submission Queues: 127 00:14:58.630 Number of I/O Completion Queues: 127 00:14:58.630 00:14:58.630 Active Namespaces 00:14:58.630 ================= 00:14:58.630 Namespace ID:1 00:14:58.630 Error Recovery Timeout: Unlimited 00:14:58.630 Command Set Identifier: NVM (00h) 00:14:58.630 Deallocate: Supported 00:14:58.630 Deallocated/Unwritten Error: Not Supported 00:14:58.630 Deallocated Read Value: Unknown 00:14:58.630 Deallocate in Write Zeroes: Not Supported 00:14:58.630 Deallocated Guard Field: 0xFFFF 00:14:58.630 Flush: Supported 00:14:58.630 Reservation: Supported 00:14:58.630 Namespace Sharing Capabilities: Multiple Controllers 00:14:58.630 Size (in LBAs): 131072 (0GiB) 00:14:58.630 Capacity (in LBAs): 131072 (0GiB) 00:14:58.630 Utilization (in LBAs): 131072 (0GiB) 00:14:58.630 NGUID: ABCDEF0123456789ABCDEF0123456789 00:14:58.630 EUI64: ABCDEF0123456789 00:14:58.630 UUID: 610566d1-66f9-4a6c-8816-afdebfca08cf 00:14:58.630 Thin Provisioning: Not Supported 00:14:58.630 Per-NS Atomic Units: Yes 00:14:58.630 Atomic Boundary Size (Normal): 0 00:14:58.630 Atomic Boundary Size (PFail): 0 00:14:58.630 Atomic Boundary Offset: 0 00:14:58.630 Maximum Single Source Range Length: 65535 00:14:58.630 Maximum Copy Length: 65535 00:14:58.630 Maximum Source Range Count: 1 00:14:58.630 NGUID/EUI64 Never Reused: No 00:14:58.630 Namespace Write Protected: No 00:14:58.630 Number of LBA Formats: 1 00:14:58.630 Current LBA Format: LBA Format #00 00:14:58.630 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:58.630 00:14:58.630 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:14:58.630 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:58.630 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.630 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:58.630 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.630 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:58.630 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:14:58.630 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:58.630 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:14:58.630 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:58.630 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:14:58.630 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:58.630 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:58.630 rmmod nvme_tcp 00:14:58.630 rmmod nvme_fabrics 00:14:58.630 rmmod nvme_keyring 00:14:58.630 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:58.630 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:14:58.630 03:17:41 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:14:58.630 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 74068 ']' 00:14:58.630 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 74068 00:14:58.630 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 74068 ']' 00:14:58.630 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 74068 00:14:58.630 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:14:58.630 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:58.630 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74068 00:14:58.630 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:58.630 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:58.630 killing process with pid 74068 00:14:58.630 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74068' 00:14:58.630 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 74068 00:14:58.630 03:17:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 74068 00:14:58.889 03:17:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:58.889 03:17:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:58.889 03:17:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:58.889 03:17:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:14:58.889 03:17:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:14:58.889 03:17:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:58.889 03:17:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:14:58.889 03:17:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:58.889 03:17:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:58.889 03:17:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:58.889 03:17:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:59.149 03:17:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:59.149 03:17:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:59.149 03:17:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:59.149 03:17:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:59.149 03:17:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:59.149 03:17:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:59.149 03:17:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:59.149 03:17:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:59.149 03:17:42 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:59.149 03:17:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:59.149 03:17:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:59.149 03:17:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:59.149 03:17:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.149 03:17:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:59.149 03:17:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.149 03:17:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:14:59.149 00:14:59.149 real 0m2.257s 00:14:59.149 user 0m4.507s 00:14:59.149 sys 0m0.742s 00:14:59.149 03:17:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:59.149 ************************************ 00:14:59.149 03:17:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:59.149 END TEST nvmf_identify 00:14:59.149 ************************************ 00:14:59.149 03:17:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:59.149 03:17:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:59.149 03:17:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:59.149 03:17:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:59.409 ************************************ 00:14:59.409 START TEST nvmf_perf 00:14:59.409 ************************************ 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:59.409 * Looking for test storage... 
00:14:59.409 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:59.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.409 --rc genhtml_branch_coverage=1 00:14:59.409 --rc genhtml_function_coverage=1 00:14:59.409 --rc genhtml_legend=1 00:14:59.409 --rc geninfo_all_blocks=1 00:14:59.409 --rc geninfo_unexecuted_blocks=1 00:14:59.409 00:14:59.409 ' 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:59.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.409 --rc genhtml_branch_coverage=1 00:14:59.409 --rc genhtml_function_coverage=1 00:14:59.409 --rc genhtml_legend=1 00:14:59.409 --rc geninfo_all_blocks=1 00:14:59.409 --rc geninfo_unexecuted_blocks=1 00:14:59.409 00:14:59.409 ' 00:14:59.409 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:59.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.409 --rc genhtml_branch_coverage=1 00:14:59.409 --rc genhtml_function_coverage=1 00:14:59.409 --rc genhtml_legend=1 00:14:59.409 --rc geninfo_all_blocks=1 00:14:59.409 --rc geninfo_unexecuted_blocks=1 00:14:59.409 00:14:59.409 ' 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:59.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.410 --rc genhtml_branch_coverage=1 00:14:59.410 --rc genhtml_function_coverage=1 00:14:59.410 --rc genhtml_legend=1 00:14:59.410 --rc geninfo_all_blocks=1 00:14:59.410 --rc geninfo_unexecuted_blocks=1 00:14:59.410 00:14:59.410 ' 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:59.410 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@458 -- # nvmf_veth_init 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:59.410 Cannot find device "nvmf_init_br" 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:59.410 Cannot find device "nvmf_init_br2" 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:14:59.410 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:59.670 Cannot find device "nvmf_tgt_br" 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:59.670 Cannot find device "nvmf_tgt_br2" 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:59.670 Cannot find device "nvmf_init_br" 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:59.670 Cannot find device "nvmf_init_br2" 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:59.670 Cannot find device "nvmf_tgt_br" 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:59.670 Cannot find device "nvmf_tgt_br2" 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:59.670 Cannot find device "nvmf_br" 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:59.670 Cannot find device "nvmf_init_if" 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:59.670 Cannot find device "nvmf_init_if2" 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:59.670 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:59.670 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:59.670 03:17:42 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:59.670 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:59.929 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:59.929 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:59.929 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:59.929 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:59.929 03:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:59.929 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:59.929 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:59.929 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:59.929 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:59.929 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:59.929 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:59.929 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:59.929 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:59.929 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:59.929 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:59.929 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:59.929 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:14:59.929 00:14:59.929 --- 10.0.0.3 ping statistics --- 00:14:59.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.929 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:14:59.929 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:59.929 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:14:59.929 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:14:59.929 00:14:59.929 --- 10.0.0.4 ping statistics --- 00:14:59.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.929 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:14:59.929 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:59.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:59.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:14:59.929 00:14:59.929 --- 10.0.0.1 ping statistics --- 00:14:59.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.929 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:14:59.929 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:59.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:59.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:14:59.929 00:14:59.929 --- 10.0.0.2 ping statistics --- 00:14:59.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.929 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:14:59.929 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:59.929 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # return 0 00:14:59.929 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:59.929 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:59.929 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:59.929 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:59.929 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:59.929 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:59.929 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:59.930 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:59.930 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:59.930 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:59.930 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:59.930 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=74319 00:14:59.930 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 74319 00:14:59.930 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 74319 ']' 00:14:59.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.930 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.930 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:59.930 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
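The reachability checks above complete the veth/bridge topology that nvmf_veth_init builds for the TCP tests: one end of each veth pair stays on the host, the target-side ends move into the nvmf_tgt_ns_spdk namespace, and all host-side ends are tied together by the nvmf_br bridge. A condensed sketch of that bring-up, using the interface names and addresses from the trace (only the first initiator/target pair is shown; stale-device cleanup and error handling are omitted):

# Condensed sketch of the topology built above (names/addresses as in the trace).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                          # target side lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if  # target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                                 # bridge the host-side veth ends
ip link set nvmf_tgt_br master nvmf_br
# Open the NVMe/TCP port; the SPDK_NVMF comment tag is what the teardown's
# iptables-save | grep -v SPDK_NVMF | iptables-restore step keys on.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.3                                                       # host -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                        # target namespace -> host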
00:14:59.930 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:59.930 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:59.930 03:17:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:59.930 [2024-10-09 03:17:43.172230] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:14:59.930 [2024-10-09 03:17:43.172337] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.189 [2024-10-09 03:17:43.314734] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:00.189 [2024-10-09 03:17:43.436692] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:00.189 [2024-10-09 03:17:43.436997] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:00.189 [2024-10-09 03:17:43.437272] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:00.189 [2024-10-09 03:17:43.437490] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:00.189 [2024-10-09 03:17:43.437619] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:00.189 [2024-10-09 03:17:43.439114] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:00.189 [2024-10-09 03:17:43.439244] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:00.189 [2024-10-09 03:17:43.439324] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:15:00.189 [2024-10-09 03:17:43.439325] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.448 [2024-10-09 03:17:43.500957] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:01.015 03:17:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:01.015 03:17:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:15:01.015 03:17:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:01.015 03:17:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:01.015 03:17:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:01.015 03:17:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:01.015 03:17:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:01.015 03:17:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:15:01.583 03:17:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:15:01.583 03:17:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:15:01.842 03:17:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:15:01.842 03:17:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:15:02.101 03:17:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:15:02.101 03:17:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:15:02.101 03:17:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:15:02.101 03:17:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:15:02.101 03:17:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:02.362 [2024-10-09 03:17:45.545730] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:02.362 03:17:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:02.624 03:17:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:02.624 03:17:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:02.882 03:17:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:02.882 03:17:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:15:03.140 03:17:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:03.399 [2024-10-09 03:17:46.585970] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:03.399 03:17:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:03.658 03:17:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:15:03.658 03:17:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:03.658 03:17:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:15:03.658 03:17:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:05.036 Initializing NVMe Controllers 00:15:05.036 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:05.036 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:05.036 Initialization complete. Launching workers. 
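The rpc.py sequence above is what stands up the NVMe-oF/TCP target that the fabric perf runs below exercise: a 64 MB, 512-byte-block malloc bdev plus the local NVMe drive are exported as two namespaces of nqn.2016-06.io.spdk:cnode1, listening on 10.0.0.3:4420. A condensed sketch of that bring-up, with the same calls and arguments as the trace ($rpc abbreviates the rpc.py path used above):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512                                    # -> "Malloc0" (64 MB, 512-byte blocks)
$rpc nvmf_create_transport -t tcp -o                              # TCP transport, options from NVMF_TRANSPORT_OPTS
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # becomes NSID 1 (512-byte sectors)
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1     # becomes NSID 2 (local drive at 0000:00:10.0)
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420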
00:15:05.036 ======================================================== 00:15:05.036 Latency(us) 00:15:05.036 Device Information : IOPS MiB/s Average min max 00:15:05.036 PCIE (0000:00:10.0) NSID 1 from core 0: 22046.30 86.12 1450.57 373.04 7500.86 00:15:05.036 ======================================================== 00:15:05.036 Total : 22046.30 86.12 1450.57 373.04 7500.86 00:15:05.036 00:15:05.036 03:17:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:05.973 Initializing NVMe Controllers 00:15:05.973 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:05.973 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:05.973 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:05.973 Initialization complete. Launching workers. 00:15:05.973 ======================================================== 00:15:05.973 Latency(us) 00:15:05.973 Device Information : IOPS MiB/s Average min max 00:15:05.973 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3767.32 14.72 265.06 95.45 4311.51 00:15:05.973 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.75 0.48 8080.64 7015.71 12096.59 00:15:05.973 ======================================================== 00:15:05.973 Total : 3891.07 15.20 513.62 95.45 12096.59 00:15:05.973 00:15:06.232 03:17:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:07.623 Initializing NVMe Controllers 00:15:07.623 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:07.623 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:07.623 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:07.623 Initialization complete. Launching workers. 00:15:07.623 ======================================================== 00:15:07.623 Latency(us) 00:15:07.623 Device Information : IOPS MiB/s Average min max 00:15:07.624 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9423.20 36.81 3396.35 463.75 7877.60 00:15:07.624 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3988.16 15.58 8063.41 5731.40 15722.42 00:15:07.624 ======================================================== 00:15:07.624 Total : 13411.37 52.39 4784.20 463.75 15722.42 00:15:07.624 00:15:07.624 03:17:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:07.624 03:17:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:10.156 Initializing NVMe Controllers 00:15:10.157 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:10.157 Controller IO queue size 128, less than required. 00:15:10.157 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:10.157 Controller IO queue size 128, less than required. 
00:15:10.157 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:10.157 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:10.157 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:10.157 Initialization complete. Launching workers. 00:15:10.157 ======================================================== 00:15:10.157 Latency(us) 00:15:10.157 Device Information : IOPS MiB/s Average min max 00:15:10.157 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1968.05 492.01 65629.12 27302.24 95029.24 00:15:10.157 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 672.64 168.16 201450.44 90834.54 317552.00 00:15:10.157 ======================================================== 00:15:10.157 Total : 2640.69 660.17 100225.60 27302.24 317552.00 00:15:10.157 00:15:10.157 03:17:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:15:10.157 Initializing NVMe Controllers 00:15:10.157 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:10.157 Controller IO queue size 128, less than required. 00:15:10.157 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:10.157 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:10.157 Controller IO queue size 128, less than required. 00:15:10.157 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:10.157 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:15:10.157 WARNING: Some requested NVMe devices were skipped 00:15:10.157 No valid NVMe controllers or AIO or URING devices found 00:15:10.157 03:17:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:15:12.691 Initializing NVMe Controllers 00:15:12.691 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:12.691 Controller IO queue size 128, less than required. 00:15:12.691 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:12.691 Controller IO queue size 128, less than required. 00:15:12.691 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:12.691 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:12.691 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:12.691 Initialization complete. Launching workers. 
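The -o 36964 run above ended with no devices to test because spdk_nvme_perf removes any namespace whose sector size does not evenly divide the requested IO size, and 36964 is a multiple of neither the 512-byte sectors of NSID 1 nor the 4096-byte sectors of NSID 2. A quick check, plus a hypothetical aligned invocation (same flags as the 36964 run; only the -o value is changed, and 36864 is simply one size divisible by both sector sizes):

echo $((36964 % 512))     # 100 -> NSID 1 is dropped
echo $((36964 % 4096))    # 100 -> NSID 2 is dropped
# Illustrative aligned run; keeps both namespaces in the test.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36864 -O 4096 \
    -w randrw -M 50 -t 5 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4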
00:15:12.691 00:15:12.691 ==================== 00:15:12.691 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:12.691 TCP transport: 00:15:12.691 polls: 8568 00:15:12.691 idle_polls: 5186 00:15:12.691 sock_completions: 3382 00:15:12.691 nvme_completions: 6119 00:15:12.691 submitted_requests: 9216 00:15:12.691 queued_requests: 1 00:15:12.691 00:15:12.691 ==================== 00:15:12.691 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:12.691 TCP transport: 00:15:12.691 polls: 8620 00:15:12.691 idle_polls: 5344 00:15:12.691 sock_completions: 3276 00:15:12.691 nvme_completions: 6449 00:15:12.691 submitted_requests: 9760 00:15:12.691 queued_requests: 1 00:15:12.691 ======================================================== 00:15:12.691 Latency(us) 00:15:12.691 Device Information : IOPS MiB/s Average min max 00:15:12.691 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1528.50 382.12 86117.97 52131.48 150180.09 00:15:12.691 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1610.94 402.74 80829.49 34411.75 142738.43 00:15:12.691 ======================================================== 00:15:12.691 Total : 3139.44 784.86 83404.29 34411.75 150180.09 00:15:12.691 00:15:12.691 03:17:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:15:12.950 03:17:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:13.209 03:17:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:15:13.209 03:17:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:13.209 03:17:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:15:13.209 03:17:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:13.209 03:17:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:15:13.209 03:17:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:13.209 03:17:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:15:13.209 03:17:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:13.209 03:17:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:13.209 rmmod nvme_tcp 00:15:13.209 rmmod nvme_fabrics 00:15:13.209 rmmod nvme_keyring 00:15:13.209 03:17:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:13.209 03:17:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:15:13.209 03:17:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:15:13.209 03:17:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 74319 ']' 00:15:13.209 03:17:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 74319 00:15:13.209 03:17:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 74319 ']' 00:15:13.209 03:17:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 74319 00:15:13.209 03:17:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:15:13.209 03:17:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:13.209 03:17:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74319 00:15:13.209 killing process with pid 74319 00:15:13.209 03:17:56 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:13.209 03:17:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:13.210 03:17:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74319' 00:15:13.210 03:17:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 74319 00:15:13.210 03:17:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 74319 00:15:14.145 03:17:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:14.145 03:17:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:14.145 03:17:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:14.145 03:17:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:15:14.145 03:17:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:14.145 03:17:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:15:14.145 03:17:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:15:14.145 03:17:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:14.145 03:17:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:14.145 03:17:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:14.145 03:17:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:14.145 03:17:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:14.145 03:17:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:14.145 03:17:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:14.145 03:17:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:14.145 03:17:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:14.145 03:17:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:14.145 03:17:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:14.145 03:17:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:14.145 03:17:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:14.145 03:17:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:14.145 03:17:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:14.145 03:17:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:14.145 03:17:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.145 03:17:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:14.145 03:17:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.145 03:17:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:15:14.145 00:15:14.145 real 0m14.940s 00:15:14.145 user 0m53.512s 00:15:14.145 sys 0m4.187s 00:15:14.145 03:17:57 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:14.146 03:17:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:14.146 ************************************ 00:15:14.146 END TEST nvmf_perf 00:15:14.146 ************************************ 00:15:14.406 03:17:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:14.406 03:17:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:14.406 03:17:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:14.406 03:17:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:14.406 ************************************ 00:15:14.406 START TEST nvmf_fio_host 00:15:14.406 ************************************ 00:15:14.406 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:14.406 * Looking for test storage... 00:15:14.406 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:14.406 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:14.406 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:14.406 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:15:14.406 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:14.406 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:14.406 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:14.406 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:14.406 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:15:14.406 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:15:14.406 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:15:14.406 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:15:14.406 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:15:14.406 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:14.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.407 --rc genhtml_branch_coverage=1 00:15:14.407 --rc genhtml_function_coverage=1 00:15:14.407 --rc genhtml_legend=1 00:15:14.407 --rc geninfo_all_blocks=1 00:15:14.407 --rc geninfo_unexecuted_blocks=1 00:15:14.407 00:15:14.407 ' 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:14.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.407 --rc genhtml_branch_coverage=1 00:15:14.407 --rc genhtml_function_coverage=1 00:15:14.407 --rc genhtml_legend=1 00:15:14.407 --rc geninfo_all_blocks=1 00:15:14.407 --rc geninfo_unexecuted_blocks=1 00:15:14.407 00:15:14.407 ' 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:14.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.407 --rc genhtml_branch_coverage=1 00:15:14.407 --rc genhtml_function_coverage=1 00:15:14.407 --rc genhtml_legend=1 00:15:14.407 --rc geninfo_all_blocks=1 00:15:14.407 --rc geninfo_unexecuted_blocks=1 00:15:14.407 00:15:14.407 ' 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:14.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.407 --rc genhtml_branch_coverage=1 00:15:14.407 --rc genhtml_function_coverage=1 00:15:14.407 --rc genhtml_legend=1 00:15:14.407 --rc geninfo_all_blocks=1 00:15:14.407 --rc geninfo_unexecuted_blocks=1 00:15:14.407 00:15:14.407 ' 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:14.407 03:17:57 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.407 03:17:57 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:14.407 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:14.408 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 
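At this point nvmf/common.sh has resolved the test environment and nvmftestinit starts rebuilding the virtual test network; the trace records that follow create it piece by piece. Consolidated, the topology amounts to the sketch below: a bridge (nvmf_br) joining two veth pairs, with the initiator ends kept in the root namespace and the target ends moved into the nvmf_tgt_ns_spdk namespace. This is a summary reconstructed from the trace using the addresses seen in this run, not a verbatim excerpt of nvmf/common.sh; a second pair (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2/10.0.0.4) and the iptables ACCEPT rules for port 4420 are added the same way.

# Sketch of the veth/bridge topology nvmf_veth_init builds (addresses from this run)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT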
00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@458 -- # nvmf_veth_init 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:14.408 Cannot find device "nvmf_init_br" 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:14.408 Cannot find device "nvmf_init_br2" 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:14.408 Cannot find device "nvmf_tgt_br" 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:15:14.408 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:15:14.667 Cannot find device "nvmf_tgt_br2" 00:15:14.667 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:15:14.667 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:14.667 Cannot find device "nvmf_init_br" 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:14.668 Cannot find device "nvmf_init_br2" 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:14.668 Cannot find device "nvmf_tgt_br" 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:14.668 Cannot find device "nvmf_tgt_br2" 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:14.668 Cannot find device "nvmf_br" 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:14.668 Cannot find device "nvmf_init_if" 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:14.668 Cannot find device "nvmf_init_if2" 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:14.668 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:14.668 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:14.668 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:14.927 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:14.927 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:14.927 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:14.927 03:17:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:14.927 03:17:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:14.927 03:17:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:14.927 03:17:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:14.927 03:17:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:14.927 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:14.927 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:15:14.927 00:15:14.927 --- 10.0.0.3 ping statistics --- 00:15:14.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.927 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:15:14.927 03:17:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:14.927 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:14.927 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:15:14.927 00:15:14.927 --- 10.0.0.4 ping statistics --- 00:15:14.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.927 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:15:14.927 03:17:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:14.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:14.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:15:14.927 00:15:14.927 --- 10.0.0.1 ping statistics --- 00:15:14.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.927 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:15:14.927 03:17:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:14.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:14.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:15:14.927 00:15:14.927 --- 10.0.0.2 ping statistics --- 00:15:14.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.927 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:15:14.927 03:17:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:14.927 03:17:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # return 0 00:15:14.927 03:17:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:14.927 03:17:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:14.927 03:17:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:14.927 03:17:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:14.927 03:17:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:14.927 03:17:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:14.927 03:17:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:14.927 03:17:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:15:14.927 03:17:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:14.927 03:17:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:14.927 03:17:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:14.927 03:17:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74794 00:15:14.927 03:17:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:14.927 03:17:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74794 00:15:14.927 03:17:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:14.927 03:17:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@831 -- # '[' -z 74794 ']' 00:15:14.927 03:17:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.927 03:17:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:14.927 03:17:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.927 03:17:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:14.927 03:17:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:14.927 [2024-10-09 03:17:58.137657] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:15:14.927 [2024-10-09 03:17:58.137775] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.186 [2024-10-09 03:17:58.280467] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:15.186 [2024-10-09 03:17:58.404330] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:15.186 [2024-10-09 03:17:58.404414] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:15.186 [2024-10-09 03:17:58.404431] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:15.186 [2024-10-09 03:17:58.404451] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:15.186 [2024-10-09 03:17:58.404462] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
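With nvmf_tgt now running inside the namespace and listening on /var/tmp/spdk.sock, host/fio.sh configures the target over rpc.py in the records that follow. Pulled together from those trace lines, the bring-up is equivalent to the short sequence below; the values are the ones used in this run, and the RPC variable is only a convenience introduced here, not part of the original script.

# Target bring-up as performed by host/fio.sh@29-36 (summary of the trace, not the script source)
# nvmf_tgt was started above: ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

The fio jobs are then run against this subsystem through the SPDK fio plugin, using --filename='trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' as the records below show.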
00:15:15.186 [2024-10-09 03:17:58.405879] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.186 [2024-10-09 03:17:58.406124] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:15.186 [2024-10-09 03:17:58.406373] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.186 [2024-10-09 03:17:58.406197] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:15:15.186 [2024-10-09 03:17:58.466433] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:16.123 03:17:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:16.123 03:17:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:15:16.123 03:17:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:16.123 [2024-10-09 03:17:59.300970] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:16.123 03:17:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:16.123 03:17:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:16.123 03:17:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:16.123 03:17:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:16.690 Malloc1 00:15:16.690 03:17:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:16.949 03:18:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:17.207 03:18:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:17.466 [2024-10-09 03:18:00.631395] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:17.466 03:18:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:17.725 03:18:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:17.725 03:18:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:17.725 03:18:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:17.725 03:18:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:17.725 03:18:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:17.725 03:18:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:17.725 03:18:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:17.725 03:18:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:15:17.725 03:18:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:17.725 03:18:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:17.725 03:18:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:17.725 03:18:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:15:17.725 03:18:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:17.725 03:18:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:17.725 03:18:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:17.726 03:18:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:17.726 03:18:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:17.726 03:18:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:15:17.726 03:18:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:17.726 03:18:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:17.726 03:18:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:17.726 03:18:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:17.726 03:18:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:17.984 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:17.984 fio-3.35 00:15:17.984 Starting 1 thread 00:15:20.517 00:15:20.517 test: (groupid=0, jobs=1): err= 0: pid=74873: Wed Oct 9 03:18:03 2024 00:15:20.517 read: IOPS=8800, BW=34.4MiB/s (36.0MB/s)(69.0MiB/2007msec) 00:15:20.517 slat (nsec): min=1894, max=289991, avg=2558.72, stdev=3230.04 00:15:20.517 clat (usec): min=2304, max=13627, avg=7571.66, stdev=608.33 00:15:20.517 lat (usec): min=2357, max=13630, avg=7574.21, stdev=608.14 00:15:20.517 clat percentiles (usec): 00:15:20.517 | 1.00th=[ 6325], 5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7111], 00:15:20.517 | 30.00th=[ 7242], 40.00th=[ 7439], 50.00th=[ 7570], 60.00th=[ 7701], 00:15:20.517 | 70.00th=[ 7832], 80.00th=[ 8029], 90.00th=[ 8291], 95.00th=[ 8586], 00:15:20.517 | 99.00th=[ 8979], 99.50th=[ 9241], 99.90th=[11863], 99.95th=[12780], 00:15:20.517 | 99.99th=[13566] 00:15:20.517 bw ( KiB/s): min=34520, max=36064, per=100.00%, avg=35200.00, stdev=663.98, samples=4 00:15:20.517 iops : min= 8630, max= 9016, avg=8800.00, stdev=166.00, samples=4 00:15:20.517 write: IOPS=8810, BW=34.4MiB/s (36.1MB/s)(69.1MiB/2007msec); 0 zone resets 00:15:20.517 slat (nsec): min=1978, max=203626, avg=2622.17, stdev=2270.78 00:15:20.517 clat (usec): min=2178, max=13660, avg=6906.56, stdev=559.67 00:15:20.517 lat (usec): min=2190, max=13662, avg=6909.18, stdev=559.61 00:15:20.517 
clat percentiles (usec): 00:15:20.517 | 1.00th=[ 5735], 5.00th=[ 6128], 10.00th=[ 6259], 20.00th=[ 6521], 00:15:20.517 | 30.00th=[ 6652], 40.00th=[ 6783], 50.00th=[ 6915], 60.00th=[ 6980], 00:15:20.517 | 70.00th=[ 7177], 80.00th=[ 7308], 90.00th=[ 7570], 95.00th=[ 7767], 00:15:20.517 | 99.00th=[ 8225], 99.50th=[ 8455], 99.90th=[11994], 99.95th=[12649], 00:15:20.517 | 99.99th=[13435] 00:15:20.517 bw ( KiB/s): min=34992, max=35440, per=99.98%, avg=35236.00, stdev=186.19, samples=4 00:15:20.517 iops : min= 8748, max= 8860, avg=8809.00, stdev=46.55, samples=4 00:15:20.517 lat (msec) : 4=0.12%, 10=99.67%, 20=0.21% 00:15:20.517 cpu : usr=68.99%, sys=24.03%, ctx=7, majf=0, minf=6 00:15:20.517 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:20.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:20.517 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:20.518 issued rwts: total=17662,17683,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:20.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:20.518 00:15:20.518 Run status group 0 (all jobs): 00:15:20.518 READ: bw=34.4MiB/s (36.0MB/s), 34.4MiB/s-34.4MiB/s (36.0MB/s-36.0MB/s), io=69.0MiB (72.3MB), run=2007-2007msec 00:15:20.518 WRITE: bw=34.4MiB/s (36.1MB/s), 34.4MiB/s-34.4MiB/s (36.1MB/s-36.1MB/s), io=69.1MiB (72.4MB), run=2007-2007msec 00:15:20.518 03:18:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:20.518 03:18:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:20.518 03:18:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:20.518 03:18:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:20.518 03:18:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:20.518 03:18:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:20.518 03:18:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:15:20.518 03:18:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:20.518 03:18:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:20.518 03:18:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:20.518 03:18:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:15:20.518 03:18:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:20.518 03:18:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:20.518 03:18:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:20.518 03:18:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:20.518 03:18:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:20.518 03:18:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:20.518 03:18:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:15:20.518 03:18:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:20.518 03:18:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:20.518 03:18:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:20.518 03:18:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:20.518 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:20.518 fio-3.35 00:15:20.518 Starting 1 thread 00:15:23.051 00:15:23.051 test: (groupid=0, jobs=1): err= 0: pid=74922: Wed Oct 9 03:18:05 2024 00:15:23.051 read: IOPS=7939, BW=124MiB/s (130MB/s)(249MiB/2008msec) 00:15:23.051 slat (usec): min=2, max=113, avg= 3.86, stdev= 2.30 00:15:23.051 clat (usec): min=2175, max=17995, avg=8951.06, stdev=2479.54 00:15:23.051 lat (usec): min=2179, max=17999, avg=8954.92, stdev=2479.62 00:15:23.051 clat percentiles (usec): 00:15:23.051 | 1.00th=[ 4228], 5.00th=[ 5145], 10.00th=[ 5800], 20.00th=[ 6783], 00:15:23.051 | 30.00th=[ 7504], 40.00th=[ 8160], 50.00th=[ 8848], 60.00th=[ 9634], 00:15:23.051 | 70.00th=[10290], 80.00th=[10945], 90.00th=[12125], 95.00th=[13042], 00:15:23.051 | 99.00th=[15401], 99.50th=[16909], 99.90th=[17433], 99.95th=[17695], 00:15:23.051 | 99.99th=[17957] 00:15:23.051 bw ( KiB/s): min=58880, max=72736, per=51.08%, avg=64888.00, stdev=6750.43, samples=4 00:15:23.051 iops : min= 3680, max= 4546, avg=4055.50, stdev=421.90, samples=4 00:15:23.051 write: IOPS=4539, BW=70.9MiB/s (74.4MB/s)(132MiB/1867msec); 0 zone resets 00:15:23.051 slat (usec): min=32, max=317, avg=39.48, stdev= 8.66 00:15:23.051 clat (usec): min=4803, max=23084, avg=12611.35, stdev=2427.79 00:15:23.051 lat (usec): min=4837, max=23125, avg=12650.82, stdev=2428.63 00:15:23.051 clat percentiles (usec): 00:15:23.051 | 1.00th=[ 7898], 5.00th=[ 9110], 10.00th=[ 9765], 20.00th=[10421], 00:15:23.051 | 30.00th=[11076], 40.00th=[11731], 50.00th=[12518], 60.00th=[13173], 00:15:23.051 | 70.00th=[13829], 80.00th=[14746], 90.00th=[15795], 95.00th=[16909], 00:15:23.051 | 99.00th=[18744], 99.50th=[20317], 99.90th=[22152], 99.95th=[22152], 00:15:23.051 | 99.99th=[23200] 00:15:23.051 bw ( KiB/s): min=61856, max=75424, per=92.93%, avg=67496.00, stdev=6584.14, samples=4 00:15:23.051 iops : min= 3866, max= 4714, avg=4218.50, stdev=411.51, samples=4 00:15:23.051 lat (msec) : 4=0.41%, 10=47.33%, 20=52.06%, 50=0.20% 00:15:23.051 cpu : usr=81.56%, sys=14.50%, ctx=5, majf=0, minf=5 00:15:23.051 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:15:23.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:23.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:23.051 issued rwts: total=15942,8475,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:23.051 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:23.051 00:15:23.051 Run status group 0 (all jobs): 00:15:23.051 
READ: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=249MiB (261MB), run=2008-2008msec 00:15:23.051 WRITE: bw=70.9MiB/s (74.4MB/s), 70.9MiB/s-70.9MiB/s (74.4MB/s-74.4MB/s), io=132MiB (139MB), run=1867-1867msec 00:15:23.051 03:18:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:23.051 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:15:23.051 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:23.051 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:23.051 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:15:23.051 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:23.051 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:15:23.051 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:23.051 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:15:23.051 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:23.051 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:23.051 rmmod nvme_tcp 00:15:23.051 rmmod nvme_fabrics 00:15:23.051 rmmod nvme_keyring 00:15:23.051 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:23.051 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:15:23.051 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:15:23.051 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 74794 ']' 00:15:23.051 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 74794 00:15:23.051 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 74794 ']' 00:15:23.051 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 74794 00:15:23.051 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:15:23.051 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:23.051 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74794 00:15:23.051 killing process with pid 74794 00:15:23.051 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:23.051 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:23.051 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74794' 00:15:23.051 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 74794 00:15:23.051 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 74794 00:15:23.310 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:23.310 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:23.310 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:23.310 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:15:23.310 03:18:06 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:23.310 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:15:23.310 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:15:23.310 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:23.310 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:23.310 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:23.310 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:23.310 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:23.568 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:23.568 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:23.568 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:23.568 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:23.568 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:23.568 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:23.569 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:23.569 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:23.569 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:23.569 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:23.569 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:23.569 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.569 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:23.569 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.569 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:15:23.569 00:15:23.569 real 0m9.367s 00:15:23.569 user 0m37.114s 00:15:23.569 sys 0m2.518s 00:15:23.569 ************************************ 00:15:23.569 END TEST nvmf_fio_host 00:15:23.569 ************************************ 00:15:23.569 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:23.569 03:18:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:23.569 03:18:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:23.569 03:18:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:23.569 03:18:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:23.569 03:18:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:23.828 ************************************ 00:15:23.828 START TEST nvmf_failover 
00:15:23.828 ************************************ 00:15:23.828 03:18:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:23.828 * Looking for test storage... 00:15:23.828 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:23.828 03:18:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:23.828 03:18:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:15:23.829 03:18:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:23.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.829 --rc genhtml_branch_coverage=1 00:15:23.829 --rc genhtml_function_coverage=1 00:15:23.829 --rc genhtml_legend=1 00:15:23.829 --rc geninfo_all_blocks=1 00:15:23.829 --rc geninfo_unexecuted_blocks=1 00:15:23.829 00:15:23.829 ' 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:23.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.829 --rc genhtml_branch_coverage=1 00:15:23.829 --rc genhtml_function_coverage=1 00:15:23.829 --rc genhtml_legend=1 00:15:23.829 --rc geninfo_all_blocks=1 00:15:23.829 --rc geninfo_unexecuted_blocks=1 00:15:23.829 00:15:23.829 ' 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:23.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.829 --rc genhtml_branch_coverage=1 00:15:23.829 --rc genhtml_function_coverage=1 00:15:23.829 --rc genhtml_legend=1 00:15:23.829 --rc geninfo_all_blocks=1 00:15:23.829 --rc geninfo_unexecuted_blocks=1 00:15:23.829 00:15:23.829 ' 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:23.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.829 --rc genhtml_branch_coverage=1 00:15:23.829 --rc genhtml_function_coverage=1 00:15:23.829 --rc genhtml_legend=1 00:15:23.829 --rc geninfo_all_blocks=1 00:15:23.829 --rc geninfo_unexecuted_blocks=1 00:15:23.829 00:15:23.829 ' 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.829 
03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:23.829 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 
00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:15:23.829 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:15:23.830 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:15:23.830 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@458 -- # nvmf_veth_init 00:15:23.830 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:23.830 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:23.830 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:23.830 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:23.830 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:23.830 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:23.830 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:23.830 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:23.830 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:23.830 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:23.830 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:23.830 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:23.830 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:23.830 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:23.830 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:23.830 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:23.830 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:23.830 Cannot find device "nvmf_init_br" 00:15:23.830 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:15:23.830 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:24.089 Cannot find device "nvmf_init_br2" 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:15:24.089 Cannot find device "nvmf_tgt_br" 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:24.089 Cannot find device "nvmf_tgt_br2" 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:24.089 Cannot find device "nvmf_init_br" 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:24.089 Cannot find device "nvmf_init_br2" 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:24.089 Cannot find device "nvmf_tgt_br" 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:24.089 Cannot find device "nvmf_tgt_br2" 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:24.089 Cannot find device "nvmf_br" 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:24.089 Cannot find device "nvmf_init_if" 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:24.089 Cannot find device "nvmf_init_if2" 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:24.089 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:24.089 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:24.089 
03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:24.089 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:24.348 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:24.348 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:24.348 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:24.348 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:24.348 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:24.348 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:24.348 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:24.348 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:24.349 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:24.349 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:24.349 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:24.349 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:24.349 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:24.349 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:24.349 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:24.349 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:24.349 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:15:24.349 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:24.349 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:24.349 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:15:24.349 00:15:24.349 --- 10.0.0.3 ping statistics --- 00:15:24.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.349 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:15:24.349 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:24.349 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:24.349 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:15:24.349 00:15:24.349 --- 10.0.0.4 ping statistics --- 00:15:24.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.349 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:24.349 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:24.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:24.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:15:24.349 00:15:24.349 --- 10.0.0.1 ping statistics --- 00:15:24.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.349 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:15:24.349 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:24.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:24.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:15:24.349 00:15:24.349 --- 10.0.0.2 ping statistics --- 00:15:24.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.349 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:15:24.349 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:24.349 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # return 0 00:15:24.349 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:24.349 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:24.349 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:24.349 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:24.349 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:24.349 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:24.349 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:24.349 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:24.349 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:24.349 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:24.349 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:24.349 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=75186 00:15:24.349 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:24.349 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 75186 00:15:24.349 03:18:07 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 75186 ']' 00:15:24.349 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.349 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:24.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.349 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.349 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:24.349 03:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:24.349 [2024-10-09 03:18:07.599752] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:15:24.349 [2024-10-09 03:18:07.599836] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:24.608 [2024-10-09 03:18:07.742374] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:24.608 [2024-10-09 03:18:07.856740] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:24.608 [2024-10-09 03:18:07.857118] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:24.608 [2024-10-09 03:18:07.857286] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:24.608 [2024-10-09 03:18:07.857438] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:24.608 [2024-10-09 03:18:07.857595] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:24.608 [2024-10-09 03:18:07.858250] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:24.608 [2024-10-09 03:18:07.858331] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:15:24.608 [2024-10-09 03:18:07.858339] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:24.866 [2024-10-09 03:18:07.918298] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:25.432 03:18:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:25.432 03:18:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:15:25.432 03:18:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:25.432 03:18:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:25.432 03:18:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:25.432 03:18:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.432 03:18:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:25.691 [2024-10-09 03:18:08.907199] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:25.691 03:18:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:25.949 Malloc0 00:15:25.949 03:18:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:26.207 03:18:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:26.466 03:18:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:26.724 [2024-10-09 03:18:09.973911] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:26.724 03:18:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:26.982 [2024-10-09 03:18:10.262116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:26.982 03:18:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:27.549 [2024-10-09 03:18:10.614509] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:15:27.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:15:27.549 03:18:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75244 00:15:27.549 03:18:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:27.549 03:18:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:27.549 03:18:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75244 /var/tmp/bdevperf.sock 00:15:27.549 03:18:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 75244 ']' 00:15:27.549 03:18:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:27.549 03:18:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:27.549 03:18:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:27.549 03:18:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:27.549 03:18:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:28.512 03:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:28.512 03:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:15:28.512 03:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:28.783 NVMe0n1 00:15:29.042 03:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:29.300 00:15:29.300 03:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75273 00:15:29.300 03:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:29.300 03:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:15:30.237 03:18:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:30.495 03:18:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:15:33.780 03:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:33.780 00:15:33.780 03:18:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:34.347 03:18:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:15:37.636 03:18:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:37.636 [2024-10-09 03:18:20.667105] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:37.636 03:18:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:15:38.573 03:18:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:38.833 03:18:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75273 00:15:45.406 { 00:15:45.406 "results": [ 00:15:45.406 { 00:15:45.406 "job": "NVMe0n1", 00:15:45.406 "core_mask": "0x1", 00:15:45.406 "workload": "verify", 00:15:45.406 "status": "finished", 00:15:45.406 "verify_range": { 00:15:45.406 "start": 0, 00:15:45.406 "length": 16384 00:15:45.406 }, 00:15:45.406 "queue_depth": 128, 00:15:45.406 "io_size": 4096, 00:15:45.406 "runtime": 15.009364, 00:15:45.406 "iops": 9059.277928098752, 00:15:45.406 "mibps": 35.38780440663575, 00:15:45.406 "io_failed": 3477, 00:15:45.406 "io_timeout": 0, 00:15:45.406 "avg_latency_us": 13745.701660993987, 00:15:45.406 "min_latency_us": 610.6763636363636, 00:15:45.406 "max_latency_us": 28716.683636363636 00:15:45.406 } 00:15:45.406 ], 00:15:45.406 "core_count": 1 00:15:45.406 } 00:15:45.406 03:18:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75244 00:15:45.406 03:18:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75244 ']' 00:15:45.406 03:18:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75244 00:15:45.406 03:18:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:15:45.406 03:18:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:45.406 03:18:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75244 00:15:45.406 killing process with pid 75244 00:15:45.406 03:18:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:45.406 03:18:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:45.406 03:18:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75244' 00:15:45.406 03:18:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 75244 00:15:45.406 03:18:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 75244 00:15:45.406 03:18:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:45.406 [2024-10-09 03:18:10.692670] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:15:45.406 [2024-10-09 03:18:10.692796] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75244 ] 00:15:45.406 [2024-10-09 03:18:10.831438] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.406 [2024-10-09 03:18:10.936402] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.406 [2024-10-09 03:18:10.991449] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:45.406 Running I/O for 15 seconds... 00:15:45.406 7317.00 IOPS, 28.58 MiB/s [2024-10-09T03:18:28.709Z] [2024-10-09 03:18:13.681610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:68752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.406 [2024-10-09 03:18:13.681680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.406 [2024-10-09 03:18:13.681728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:68832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.406 [2024-10-09 03:18:13.681744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.406 [2024-10-09 03:18:13.681761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:68840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.406 [2024-10-09 03:18:13.681775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.406 [2024-10-09 03:18:13.681791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:68848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.406 [2024-10-09 03:18:13.681805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.406 [2024-10-09 03:18:13.681821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:68856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.406 [2024-10-09 03:18:13.681835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.406 [2024-10-09 03:18:13.681851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:68864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.406 [2024-10-09 03:18:13.681865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.406 [2024-10-09 03:18:13.681880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:68872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.406 [2024-10-09 03:18:13.681895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.406 [2024-10-09 03:18:13.681910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:68880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.406 [2024-10-09 03:18:13.681924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:45.406 [2024-10-09 03:18:13.681940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:68888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.406 [2024-10-09 03:18:13.681954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.406 [2024-10-09 03:18:13.681970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:68760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.406 [2024-10-09 03:18:13.681984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.406 [2024-10-09 03:18:13.682009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652660 is same with the state(6) to be set 00:15:45.406 [2024-10-09 03:18:13.682089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.406 [2024-10-09 03:18:13.682102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.406 [2024-10-09 03:18:13.682114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68768 len:8 PRP1 0x0 PRP2 0x0 00:15:45.406 [2024-10-09 03:18:13.682128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.406 [2024-10-09 03:18:13.682143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.406 [2024-10-09 03:18:13.682154] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.406 [2024-10-09 03:18:13.682165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68896 len:8 PRP1 0x0 PRP2 0x0 00:15:45.406 [2024-10-09 03:18:13.682179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.406 [2024-10-09 03:18:13.682193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.406 [2024-10-09 03:18:13.682203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.406 [2024-10-09 03:18:13.682214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68904 len:8 PRP1 0x0 PRP2 0x0 00:15:45.406 [2024-10-09 03:18:13.682227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.406 [2024-10-09 03:18:13.682241] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.406 [2024-10-09 03:18:13.682260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.406 [2024-10-09 03:18:13.682271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68912 len:8 PRP1 0x0 PRP2 0x0 00:15:45.406 [2024-10-09 03:18:13.682285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.406 [2024-10-09 03:18:13.682299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.406 [2024-10-09 03:18:13.682309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.406 [2024-10-09 03:18:13.682320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:68920 len:8 PRP1 0x0 PRP2 0x0 00:15:45.406 [2024-10-09 03:18:13.682363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.406 [2024-10-09 03:18:13.682376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.406 [2024-10-09 03:18:13.682386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.406 [2024-10-09 03:18:13.682396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68928 len:8 PRP1 0x0 PRP2 0x0 00:15:45.406 [2024-10-09 03:18:13.682410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.407 [2024-10-09 03:18:13.682423] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.407 [2024-10-09 03:18:13.682432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.407 [2024-10-09 03:18:13.682442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68936 len:8 PRP1 0x0 PRP2 0x0 00:15:45.407 [2024-10-09 03:18:13.682455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.407 [2024-10-09 03:18:13.682468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.407 [2024-10-09 03:18:13.682478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.407 [2024-10-09 03:18:13.682488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68944 len:8 PRP1 0x0 PRP2 0x0 00:15:45.407 [2024-10-09 03:18:13.682509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.407 [2024-10-09 03:18:13.682523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.407 [2024-10-09 03:18:13.682533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.407 [2024-10-09 03:18:13.682543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68952 len:8 PRP1 0x0 PRP2 0x0 00:15:45.407 [2024-10-09 03:18:13.682555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.407 [2024-10-09 03:18:13.682568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.407 [2024-10-09 03:18:13.682578] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.407 [2024-10-09 03:18:13.682588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68960 len:8 PRP1 0x0 PRP2 0x0 00:15:45.407 [2024-10-09 03:18:13.682601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.407 [2024-10-09 03:18:13.682614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.407 [2024-10-09 03:18:13.682624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.407 [2024-10-09 03:18:13.682634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68968 len:8 PRP1 0x0 PRP2 
0x0 00:15:45.407 [2024-10-09 03:18:13.682647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.407 [2024-10-09 03:18:13.682660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.407 [2024-10-09 03:18:13.682676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.407 [2024-10-09 03:18:13.682686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68976 len:8 PRP1 0x0 PRP2 0x0 00:15:45.407 [2024-10-09 03:18:13.682699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.407 [2024-10-09 03:18:13.682712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.407 [2024-10-09 03:18:13.682722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.407 [2024-10-09 03:18:13.682733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68984 len:8 PRP1 0x0 PRP2 0x0 00:15:45.407 [2024-10-09 03:18:13.682745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.407 [2024-10-09 03:18:13.682759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.407 [2024-10-09 03:18:13.682768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.407 [2024-10-09 03:18:13.682778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68992 len:8 PRP1 0x0 PRP2 0x0 00:15:45.407 [2024-10-09 03:18:13.682791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.407 [2024-10-09 03:18:13.682804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.407 [2024-10-09 03:18:13.682814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.407 [2024-10-09 03:18:13.682824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69000 len:8 PRP1 0x0 PRP2 0x0 00:15:45.407 [2024-10-09 03:18:13.682837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.407 [2024-10-09 03:18:13.682850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.407 [2024-10-09 03:18:13.682860] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.407 [2024-10-09 03:18:13.682876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69008 len:8 PRP1 0x0 PRP2 0x0 00:15:45.407 [2024-10-09 03:18:13.682889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.407 [2024-10-09 03:18:13.682902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.407 [2024-10-09 03:18:13.682912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.407 [2024-10-09 03:18:13.682922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69016 len:8 PRP1 0x0 PRP2 0x0 00:15:45.407 [2024-10-09 03:18:13.682935] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.407 [2024-10-09 03:18:13.682948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.407 [2024-10-09 03:18:13.682958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.407 [2024-10-09 03:18:13.682968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69024 len:8 PRP1 0x0 PRP2 0x0 00:15:45.407 [2024-10-09 03:18:13.682980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.407 [2024-10-09 03:18:13.682994] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.407 [2024-10-09 03:18:13.683003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.407 [2024-10-09 03:18:13.683014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69032 len:8 PRP1 0x0 PRP2 0x0 00:15:45.407 [2024-10-09 03:18:13.683027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.407 [2024-10-09 03:18:13.683040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.407 [2024-10-09 03:18:13.683054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.407 [2024-10-09 03:18:13.683064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69040 len:8 PRP1 0x0 PRP2 0x0 00:15:45.407 [2024-10-09 03:18:13.683085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.407 [2024-10-09 03:18:13.683101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.407 [2024-10-09 03:18:13.683111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.407 [2024-10-09 03:18:13.683121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69048 len:8 PRP1 0x0 PRP2 0x0 00:15:45.407 [2024-10-09 03:18:13.683133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.407 [2024-10-09 03:18:13.683147] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.407 [2024-10-09 03:18:13.683157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.407 [2024-10-09 03:18:13.683167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69056 len:8 PRP1 0x0 PRP2 0x0 00:15:45.407 [2024-10-09 03:18:13.683179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.407 [2024-10-09 03:18:13.683192] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.407 [2024-10-09 03:18:13.683202] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.407 [2024-10-09 03:18:13.683213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69064 len:8 PRP1 0x0 PRP2 0x0 00:15:45.407 [2024-10-09 03:18:13.683225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.407 [2024-10-09 03:18:13.683238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.407 [2024-10-09 03:18:13.683255] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.407 [2024-10-09 03:18:13.683269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69072 len:8 PRP1 0x0 PRP2 0x0 00:15:45.407 [2024-10-09 03:18:13.683281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.407 [2024-10-09 03:18:13.683295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.407 [2024-10-09 03:18:13.683305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.407 [2024-10-09 03:18:13.683315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69080 len:8 PRP1 0x0 PRP2 0x0 00:15:45.407 [2024-10-09 03:18:13.683328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.407 [2024-10-09 03:18:13.683341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.407 [2024-10-09 03:18:13.683350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.407 [2024-10-09 03:18:13.683360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69088 len:8 PRP1 0x0 PRP2 0x0 00:15:45.407 [2024-10-09 03:18:13.683373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.407 [2024-10-09 03:18:13.683387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.407 [2024-10-09 03:18:13.683397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.407 [2024-10-09 03:18:13.683407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69096 len:8 PRP1 0x0 PRP2 0x0 00:15:45.407 [2024-10-09 03:18:13.683420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.407 [2024-10-09 03:18:13.683432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.407 [2024-10-09 03:18:13.683446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.407 [2024-10-09 03:18:13.683457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69104 len:8 PRP1 0x0 PRP2 0x0 00:15:45.407 [2024-10-09 03:18:13.683469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.407 [2024-10-09 03:18:13.683482] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.407 [2024-10-09 03:18:13.683492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.407 [2024-10-09 03:18:13.683502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69112 len:8 PRP1 0x0 PRP2 0x0 00:15:45.407 [2024-10-09 03:18:13.683514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:45.407 [2024-10-09 03:18:13.683528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.407 [2024-10-09 03:18:13.683538] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.407 [2024-10-09 03:18:13.683547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69120 len:8 PRP1 0x0 PRP2 0x0 00:15:45.407 [2024-10-09 03:18:13.683560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.407 [2024-10-09 03:18:13.683573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.407 [2024-10-09 03:18:13.683599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.407 [2024-10-09 03:18:13.683609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69128 len:8 PRP1 0x0 PRP2 0x0 00:15:45.408 [2024-10-09 03:18:13.683622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.408 [2024-10-09 03:18:13.683642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.408 [2024-10-09 03:18:13.683652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.408 [2024-10-09 03:18:13.683663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69136 len:8 PRP1 0x0 PRP2 0x0 00:15:45.408 [2024-10-09 03:18:13.683676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.408 [2024-10-09 03:18:13.683689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.408 [2024-10-09 03:18:13.683700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.408 [2024-10-09 03:18:13.683710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69144 len:8 PRP1 0x0 PRP2 0x0 00:15:45.408 [2024-10-09 03:18:13.683723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.408 [2024-10-09 03:18:13.683737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.408 [2024-10-09 03:18:13.683747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.408 [2024-10-09 03:18:13.683757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69152 len:8 PRP1 0x0 PRP2 0x0 00:15:45.408 [2024-10-09 03:18:13.683770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.408 [2024-10-09 03:18:13.683784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.408 [2024-10-09 03:18:13.683795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.408 [2024-10-09 03:18:13.683805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69160 len:8 PRP1 0x0 PRP2 0x0 00:15:45.408 [2024-10-09 03:18:13.683818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.408 [2024-10-09 
03:18:13.683832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.408 [2024-10-09 03:18:13.683846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.408 [2024-10-09 03:18:13.683856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69168 len:8 PRP1 0x0 PRP2 0x0 00:15:45.408 [2024-10-09 03:18:13.683869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.408 [2024-10-09 03:18:13.683883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.408 [2024-10-09 03:18:13.683893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.408 [2024-10-09 03:18:13.683903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69176 len:8 PRP1 0x0 PRP2 0x0 00:15:45.408 [2024-10-09 03:18:13.683916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.408 [2024-10-09 03:18:13.683929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.408 [2024-10-09 03:18:13.683939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.408 [2024-10-09 03:18:13.683950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69184 len:8 PRP1 0x0 PRP2 0x0 00:15:45.408 [2024-10-09 03:18:13.683962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.408 [2024-10-09 03:18:13.683976] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.408 [2024-10-09 03:18:13.683986] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.408 [2024-10-09 03:18:13.684011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69192 len:8 PRP1 0x0 PRP2 0x0 00:15:45.408 [2024-10-09 03:18:13.684028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.408 [2024-10-09 03:18:13.684042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.408 [2024-10-09 03:18:13.684052] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.408 [2024-10-09 03:18:13.684079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69200 len:8 PRP1 0x0 PRP2 0x0 00:15:45.408 [2024-10-09 03:18:13.684100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.408 [2024-10-09 03:18:13.684116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.408 [2024-10-09 03:18:13.684126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.408 [2024-10-09 03:18:13.684136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69208 len:8 PRP1 0x0 PRP2 0x0 00:15:45.408 [2024-10-09 03:18:13.684149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.408 [2024-10-09 03:18:13.684164] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.408 [2024-10-09 03:18:13.684174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.408 [2024-10-09 03:18:13.684185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69216 len:8 PRP1 0x0 PRP2 0x0 00:15:45.408 [2024-10-09 03:18:13.684198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.408 [2024-10-09 03:18:13.684211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.408 [2024-10-09 03:18:13.684222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.408 [2024-10-09 03:18:13.684232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69224 len:8 PRP1 0x0 PRP2 0x0 00:15:45.408 [2024-10-09 03:18:13.684245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.408 [2024-10-09 03:18:13.684258] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.408 [2024-10-09 03:18:13.684273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.408 [2024-10-09 03:18:13.684283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69232 len:8 PRP1 0x0 PRP2 0x0 00:15:45.408 [2024-10-09 03:18:13.684297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.408 [2024-10-09 03:18:13.684310] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.408 [2024-10-09 03:18:13.684320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.408 [2024-10-09 03:18:13.684331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69240 len:8 PRP1 0x0 PRP2 0x0 00:15:45.408 [2024-10-09 03:18:13.684344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.408 [2024-10-09 03:18:13.684358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.408 [2024-10-09 03:18:13.684368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.408 [2024-10-09 03:18:13.684378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69248 len:8 PRP1 0x0 PRP2 0x0 00:15:45.408 [2024-10-09 03:18:13.684392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.408 [2024-10-09 03:18:13.684405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.408 [2024-10-09 03:18:13.684422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.408 [2024-10-09 03:18:13.684432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69256 len:8 PRP1 0x0 PRP2 0x0 00:15:45.408 [2024-10-09 03:18:13.684460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.408 [2024-10-09 03:18:13.684473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:15:45.408 [2024-10-09 03:18:13.684483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.408 [2024-10-09 03:18:13.684493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69264 len:8 PRP1 0x0 PRP2 0x0 00:15:45.408 [2024-10-09 03:18:13.684505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.408 [2024-10-09 03:18:13.684518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.408 [2024-10-09 03:18:13.684528] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.408 [2024-10-09 03:18:13.684538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69272 len:8 PRP1 0x0 PRP2 0x0 00:15:45.408 [2024-10-09 03:18:13.684560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.408 [2024-10-09 03:18:13.684573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.408 [2024-10-09 03:18:13.684583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.408 [2024-10-09 03:18:13.684593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69280 len:8 PRP1 0x0 PRP2 0x0 00:15:45.408 [2024-10-09 03:18:13.684606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.408 [2024-10-09 03:18:13.684619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.408 [2024-10-09 03:18:13.684629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.408 [2024-10-09 03:18:13.684639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69288 len:8 PRP1 0x0 PRP2 0x0 00:15:45.408 [2024-10-09 03:18:13.684652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.408 [2024-10-09 03:18:13.684666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.408 [2024-10-09 03:18:13.684680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.408 [2024-10-09 03:18:13.684690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69296 len:8 PRP1 0x0 PRP2 0x0 00:15:45.408 [2024-10-09 03:18:13.684703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.408 [2024-10-09 03:18:13.684716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.408 [2024-10-09 03:18:13.684727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.408 [2024-10-09 03:18:13.684737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69304 len:8 PRP1 0x0 PRP2 0x0 00:15:45.408 [2024-10-09 03:18:13.684750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.408 [2024-10-09 03:18:13.684763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.408 [2024-10-09 03:18:13.684772] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.408 [2024-10-09 03:18:13.684782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69312 len:8 PRP1 0x0 PRP2 0x0 00:15:45.408 [2024-10-09 03:18:13.684795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.408 [2024-10-09 03:18:13.684814] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.408 [2024-10-09 03:18:13.684824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.409 [2024-10-09 03:18:13.684834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69320 len:8 PRP1 0x0 PRP2 0x0 00:15:45.409 [2024-10-09 03:18:13.684846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.409 [2024-10-09 03:18:13.684859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.409 [2024-10-09 03:18:13.684869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.409 [2024-10-09 03:18:13.684879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69328 len:8 PRP1 0x0 PRP2 0x0 00:15:45.409 [2024-10-09 03:18:13.684891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.409 [2024-10-09 03:18:13.684904] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.409 [2024-10-09 03:18:13.684914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.409 [2024-10-09 03:18:13.684924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69336 len:8 PRP1 0x0 PRP2 0x0 00:15:45.409 [2024-10-09 03:18:13.684958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.409 [2024-10-09 03:18:13.684972] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.409 [2024-10-09 03:18:13.684982] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.409 [2024-10-09 03:18:13.684992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69344 len:8 PRP1 0x0 PRP2 0x0 00:15:45.409 [2024-10-09 03:18:13.685005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.409 [2024-10-09 03:18:13.685019] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.409 [2024-10-09 03:18:13.685029] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.409 [2024-10-09 03:18:13.685039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69352 len:8 PRP1 0x0 PRP2 0x0 00:15:45.409 [2024-10-09 03:18:13.685052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.409 [2024-10-09 03:18:13.685065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.409 [2024-10-09 03:18:13.685089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:15:45.409 [2024-10-09 03:18:13.685102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69360 len:8 PRP1 0x0 PRP2 0x0 00:15:45.409 [2024-10-09 03:18:13.685115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.409 [2024-10-09 03:18:13.685129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.409 [2024-10-09 03:18:13.685139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.409 [2024-10-09 03:18:13.685149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69368 len:8 PRP1 0x0 PRP2 0x0 00:15:45.409 [2024-10-09 03:18:13.685163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.409 [2024-10-09 03:18:13.685176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.409 [2024-10-09 03:18:13.685186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.409 [2024-10-09 03:18:13.685197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69376 len:8 PRP1 0x0 PRP2 0x0 00:15:45.409 [2024-10-09 03:18:13.685216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.409 [2024-10-09 03:18:13.685230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.409 [2024-10-09 03:18:13.685240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.409 [2024-10-09 03:18:13.685250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69384 len:8 PRP1 0x0 PRP2 0x0 00:15:45.409 [2024-10-09 03:18:13.685263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.409 [2024-10-09 03:18:13.685277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.409 [2024-10-09 03:18:13.685287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.409 [2024-10-09 03:18:13.685297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69392 len:8 PRP1 0x0 PRP2 0x0 00:15:45.409 [2024-10-09 03:18:13.685310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.409 [2024-10-09 03:18:13.685323] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.409 [2024-10-09 03:18:13.685348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.409 [2024-10-09 03:18:13.685358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69400 len:8 PRP1 0x0 PRP2 0x0 00:15:45.409 [2024-10-09 03:18:13.685375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.409 [2024-10-09 03:18:13.685388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.409 [2024-10-09 03:18:13.685398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.409 
[2024-10-09 03:18:13.685408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69408 len:8 PRP1 0x0 PRP2 0x0 00:15:45.409 [2024-10-09 03:18:13.685420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.409 [2024-10-09 03:18:13.685434] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.409 [2024-10-09 03:18:13.685444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.409 [2024-10-09 03:18:13.685453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69416 len:8 PRP1 0x0 PRP2 0x0 00:15:45.409 [2024-10-09 03:18:13.685466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.409 [2024-10-09 03:18:13.685479] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.409 [2024-10-09 03:18:13.685494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.409 [2024-10-09 03:18:13.685504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69424 len:8 PRP1 0x0 PRP2 0x0 00:15:45.409 [2024-10-09 03:18:13.685517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.409 [2024-10-09 03:18:13.685531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.409 [2024-10-09 03:18:13.685540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.409 [2024-10-09 03:18:13.685550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69432 len:8 PRP1 0x0 PRP2 0x0 00:15:45.409 [2024-10-09 03:18:13.685563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.409 [2024-10-09 03:18:13.685576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.409 [2024-10-09 03:18:13.685586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.409 [2024-10-09 03:18:13.685601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69440 len:8 PRP1 0x0 PRP2 0x0 00:15:45.409 [2024-10-09 03:18:13.685614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.409 [2024-10-09 03:18:13.685627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.409 [2024-10-09 03:18:13.685637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.409 [2024-10-09 03:18:13.685647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69448 len:8 PRP1 0x0 PRP2 0x0 00:15:45.409 [2024-10-09 03:18:13.685659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.409 [2024-10-09 03:18:13.685672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.409 [2024-10-09 03:18:13.685682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.409 [2024-10-09 03:18:13.685692] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69456 len:8 PRP1 0x0 PRP2 0x0 00:15:45.409 [2024-10-09 03:18:13.685705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.409 [2024-10-09 03:18:13.685718] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.409 [2024-10-09 03:18:13.685727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.409 [2024-10-09 03:18:13.685737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69464 len:8 PRP1 0x0 PRP2 0x0 00:15:45.409 [2024-10-09 03:18:13.685755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.409 [2024-10-09 03:18:13.685769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.409 [2024-10-09 03:18:13.685779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.409 [2024-10-09 03:18:13.685788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69472 len:8 PRP1 0x0 PRP2 0x0 00:15:45.409 [2024-10-09 03:18:13.685801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.409 [2024-10-09 03:18:13.685814] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.409 [2024-10-09 03:18:13.685824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.409 [2024-10-09 03:18:13.685834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69480 len:8 PRP1 0x0 PRP2 0x0 00:15:45.409 [2024-10-09 03:18:13.685846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.409 [2024-10-09 03:18:13.685860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.409 [2024-10-09 03:18:13.685887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.409 [2024-10-09 03:18:13.685899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69488 len:8 PRP1 0x0 PRP2 0x0 00:15:45.409 [2024-10-09 03:18:13.697281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.409 [2024-10-09 03:18:13.697321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.409 [2024-10-09 03:18:13.697334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.409 [2024-10-09 03:18:13.697345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69496 len:8 PRP1 0x0 PRP2 0x0 00:15:45.409 [2024-10-09 03:18:13.697360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.409 [2024-10-09 03:18:13.697402] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.409 [2024-10-09 03:18:13.697414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.409 [2024-10-09 03:18:13.697424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:69504 len:8 PRP1 0x0 PRP2 0x0 00:15:45.409 [2024-10-09 03:18:13.697437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.409 [2024-10-09 03:18:13.697451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.409 [2024-10-09 03:18:13.697475] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.409 [2024-10-09 03:18:13.697485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69512 len:8 PRP1 0x0 PRP2 0x0 00:15:45.409 [2024-10-09 03:18:13.697498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.410 [2024-10-09 03:18:13.697511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.410 [2024-10-09 03:18:13.697520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.410 [2024-10-09 03:18:13.697530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69520 len:8 PRP1 0x0 PRP2 0x0 00:15:45.410 [2024-10-09 03:18:13.697543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.410 [2024-10-09 03:18:13.697556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.410 [2024-10-09 03:18:13.697565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.410 [2024-10-09 03:18:13.697575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69528 len:8 PRP1 0x0 PRP2 0x0 00:15:45.410 [2024-10-09 03:18:13.697589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.410 [2024-10-09 03:18:13.697602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.410 [2024-10-09 03:18:13.697611] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.410 [2024-10-09 03:18:13.697621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69536 len:8 PRP1 0x0 PRP2 0x0 00:15:45.410 [2024-10-09 03:18:13.697633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.410 [2024-10-09 03:18:13.697646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.410 [2024-10-09 03:18:13.697656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.410 [2024-10-09 03:18:13.697665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69544 len:8 PRP1 0x0 PRP2 0x0 00:15:45.410 [2024-10-09 03:18:13.697678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.410 [2024-10-09 03:18:13.697690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.410 [2024-10-09 03:18:13.697700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.410 [2024-10-09 03:18:13.697710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69552 len:8 PRP1 0x0 PRP2 0x0 
00:15:45.410 [2024-10-09 03:18:13.697723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.410 [2024-10-09 03:18:13.697736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.410 [2024-10-09 03:18:13.697745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.410 [2024-10-09 03:18:13.697755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69560 len:8 PRP1 0x0 PRP2 0x0 00:15:45.410 [2024-10-09 03:18:13.697767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.410 [2024-10-09 03:18:13.697786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.410 [2024-10-09 03:18:13.697796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.410 [2024-10-09 03:18:13.697806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69568 len:8 PRP1 0x0 PRP2 0x0 00:15:45.410 [2024-10-09 03:18:13.697818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.410 [2024-10-09 03:18:13.697831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.410 [2024-10-09 03:18:13.697841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.410 [2024-10-09 03:18:13.697850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69576 len:8 PRP1 0x0 PRP2 0x0 00:15:45.410 [2024-10-09 03:18:13.697877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.410 [2024-10-09 03:18:13.697890] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.410 [2024-10-09 03:18:13.697899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.410 [2024-10-09 03:18:13.697908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69584 len:8 PRP1 0x0 PRP2 0x0 00:15:45.410 [2024-10-09 03:18:13.697920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.410 [2024-10-09 03:18:13.697933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.410 [2024-10-09 03:18:13.697941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.410 [2024-10-09 03:18:13.697951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69592 len:8 PRP1 0x0 PRP2 0x0 00:15:45.410 [2024-10-09 03:18:13.697963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.410 [2024-10-09 03:18:13.697975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.410 [2024-10-09 03:18:13.697984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.410 [2024-10-09 03:18:13.697993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69600 len:8 PRP1 0x0 PRP2 0x0 00:15:45.410 [2024-10-09 03:18:13.698036] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.410 [2024-10-09 03:18:13.698051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.410 [2024-10-09 03:18:13.698060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.410 [2024-10-09 03:18:13.698086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69608 len:8 PRP1 0x0 PRP2 0x0 00:15:45.410 [2024-10-09 03:18:13.698100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.410 [2024-10-09 03:18:13.698114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.410 [2024-10-09 03:18:13.698124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.410 [2024-10-09 03:18:13.698134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69616 len:8 PRP1 0x0 PRP2 0x0 00:15:45.410 [2024-10-09 03:18:13.698148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.410 [2024-10-09 03:18:13.698161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.410 [2024-10-09 03:18:13.698171] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.410 [2024-10-09 03:18:13.698189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69624 len:8 PRP1 0x0 PRP2 0x0 00:15:45.410 [2024-10-09 03:18:13.698203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.410 [2024-10-09 03:18:13.698217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.410 [2024-10-09 03:18:13.698227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.410 [2024-10-09 03:18:13.698237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69632 len:8 PRP1 0x0 PRP2 0x0 00:15:45.410 [2024-10-09 03:18:13.698251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.410 [2024-10-09 03:18:13.698264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.410 [2024-10-09 03:18:13.698274] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.410 [2024-10-09 03:18:13.698284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69640 len:8 PRP1 0x0 PRP2 0x0 00:15:45.410 [2024-10-09 03:18:13.698298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.410 [2024-10-09 03:18:13.698311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.410 [2024-10-09 03:18:13.698321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.410 [2024-10-09 03:18:13.698331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69648 len:8 PRP1 0x0 PRP2 0x0 00:15:45.410 [2024-10-09 03:18:13.698360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.410 [2024-10-09 03:18:13.698391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.410 [2024-10-09 03:18:13.698410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.410 [2024-10-09 03:18:13.698424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69656 len:8 PRP1 0x0 PRP2 0x0 00:15:45.410 [2024-10-09 03:18:13.698441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.410 [2024-10-09 03:18:13.698459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.410 [2024-10-09 03:18:13.698471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.410 [2024-10-09 03:18:13.698485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69664 len:8 PRP1 0x0 PRP2 0x0 00:15:45.410 [2024-10-09 03:18:13.698502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.410 [2024-10-09 03:18:13.698519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.410 [2024-10-09 03:18:13.698532] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.410 [2024-10-09 03:18:13.698545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69672 len:8 PRP1 0x0 PRP2 0x0 00:15:45.410 [2024-10-09 03:18:13.698563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.410 [2024-10-09 03:18:13.698581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.410 [2024-10-09 03:18:13.698594] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.410 [2024-10-09 03:18:13.698607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69680 len:8 PRP1 0x0 PRP2 0x0 00:15:45.410 [2024-10-09 03:18:13.698625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.410 [2024-10-09 03:18:13.698642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.410 [2024-10-09 03:18:13.698663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.411 [2024-10-09 03:18:13.698677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69688 len:8 PRP1 0x0 PRP2 0x0 00:15:45.411 [2024-10-09 03:18:13.698694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.411 [2024-10-09 03:18:13.698712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.411 [2024-10-09 03:18:13.698725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.411 [2024-10-09 03:18:13.698738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69696 len:8 PRP1 0x0 PRP2 0x0 00:15:45.411 [2024-10-09 03:18:13.698756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:15:45.411 [2024-10-09 03:18:13.698774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.411 [2024-10-09 03:18:13.698786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.411 [2024-10-09 03:18:13.698799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69704 len:8 PRP1 0x0 PRP2 0x0 00:15:45.411 [2024-10-09 03:18:13.698817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.411 [2024-10-09 03:18:13.698834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.411 [2024-10-09 03:18:13.698847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.411 [2024-10-09 03:18:13.698860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69712 len:8 PRP1 0x0 PRP2 0x0 00:15:45.411 [2024-10-09 03:18:13.698877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.411 [2024-10-09 03:18:13.698896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.411 [2024-10-09 03:18:13.698908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.411 [2024-10-09 03:18:13.698922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69720 len:8 PRP1 0x0 PRP2 0x0 00:15:45.411 [2024-10-09 03:18:13.698939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.411 [2024-10-09 03:18:13.698957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.411 [2024-10-09 03:18:13.698969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.411 [2024-10-09 03:18:13.698982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69728 len:8 PRP1 0x0 PRP2 0x0 00:15:45.411 [2024-10-09 03:18:13.699000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.411 [2024-10-09 03:18:13.699017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.411 [2024-10-09 03:18:13.699030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.411 [2024-10-09 03:18:13.699043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69736 len:8 PRP1 0x0 PRP2 0x0 00:15:45.411 [2024-10-09 03:18:13.699069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.411 [2024-10-09 03:18:13.699110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.411 [2024-10-09 03:18:13.699125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.411 [2024-10-09 03:18:13.699138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69744 len:8 PRP1 0x0 PRP2 0x0 00:15:45.411 [2024-10-09 03:18:13.699155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.411 [2024-10-09 03:18:13.699182] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.411 [2024-10-09 03:18:13.699195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.411 [2024-10-09 03:18:13.699208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69752 len:8 PRP1 0x0 PRP2 0x0 00:15:45.411 [2024-10-09 03:18:13.699227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.411 [2024-10-09 03:18:13.699245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.411 [2024-10-09 03:18:13.699258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.411 [2024-10-09 03:18:13.699271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69760 len:8 PRP1 0x0 PRP2 0x0 00:15:45.411 [2024-10-09 03:18:13.699288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.411 [2024-10-09 03:18:13.699306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.411 [2024-10-09 03:18:13.699319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.411 [2024-10-09 03:18:13.699332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69768 len:8 PRP1 0x0 PRP2 0x0 00:15:45.411 [2024-10-09 03:18:13.699350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.411 [2024-10-09 03:18:13.699368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.411 [2024-10-09 03:18:13.699381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.411 [2024-10-09 03:18:13.699404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68776 len:8 PRP1 0x0 PRP2 0x0 00:15:45.411 [2024-10-09 03:18:13.699421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.411 [2024-10-09 03:18:13.699448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.411 [2024-10-09 03:18:13.699461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.411 [2024-10-09 03:18:13.699474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68784 len:8 PRP1 0x0 PRP2 0x0 00:15:45.411 [2024-10-09 03:18:13.699492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.411 [2024-10-09 03:18:13.699510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.411 [2024-10-09 03:18:13.699523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.411 [2024-10-09 03:18:13.699536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68792 len:8 PRP1 0x0 PRP2 0x0 00:15:45.411 [2024-10-09 03:18:13.699553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.411 [2024-10-09 03:18:13.699571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:15:45.411 [2024-10-09 03:18:13.699590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.411 [2024-10-09 03:18:13.699603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68800 len:8 PRP1 0x0 PRP2 0x0 00:15:45.411 [2024-10-09 03:18:13.699621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.411 [2024-10-09 03:18:13.699639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.411 [2024-10-09 03:18:13.699652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.411 [2024-10-09 03:18:13.699666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68808 len:8 PRP1 0x0 PRP2 0x0 00:15:45.411 [2024-10-09 03:18:13.699690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.411 [2024-10-09 03:18:13.699709] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.411 [2024-10-09 03:18:13.699721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.411 [2024-10-09 03:18:13.699735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68816 len:8 PRP1 0x0 PRP2 0x0 00:15:45.411 [2024-10-09 03:18:13.699752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.411 [2024-10-09 03:18:13.699770] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.411 [2024-10-09 03:18:13.699783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.411 [2024-10-09 03:18:13.699796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68824 len:8 PRP1 0x0 PRP2 0x0 00:15:45.411 [2024-10-09 03:18:13.699813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.411 [2024-10-09 03:18:13.699884] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1652660 was disconnected and freed. reset controller. 
00:15:45.411 [2024-10-09 03:18:13.699906] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:15:45.411 [2024-10-09 03:18:13.699979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.411 [2024-10-09 03:18:13.700017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.411 [2024-10-09 03:18:13.700038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.411 [2024-10-09 03:18:13.700097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.411 [2024-10-09 03:18:13.700118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.411 [2024-10-09 03:18:13.700135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.411 [2024-10-09 03:18:13.700162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.411 [2024-10-09 03:18:13.700180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.411 [2024-10-09 03:18:13.700198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:45.411 [2024-10-09 03:18:13.700254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e42e0 (9): Bad file descriptor 00:15:45.411 [2024-10-09 03:18:13.705457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:45.411 [2024-10-09 03:18:13.746879] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:45.411 8000.50 IOPS, 31.25 MiB/s [2024-10-09T03:18:28.714Z] 8533.67 IOPS, 33.33 MiB/s [2024-10-09T03:18:28.714Z] 8840.25 IOPS, 34.53 MiB/s [2024-10-09T03:18:28.714Z] [2024-10-09 03:18:17.354567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.411 [2024-10-09 03:18:17.354637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.411 [2024-10-09 03:18:17.354693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.411 [2024-10-09 03:18:17.354709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.411 [2024-10-09 03:18:17.354741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.411 [2024-10-09 03:18:17.354766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.411 [2024-10-09 03:18:17.354783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.411 [2024-10-09 03:18:17.354797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.411 [2024-10-09 03:18:17.354811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.412 [2024-10-09 03:18:17.354825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.354840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.412 [2024-10-09 03:18:17.354853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.354868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:94632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.412 [2024-10-09 03:18:17.354881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.354896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.412 [2024-10-09 03:18:17.354910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.354925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:94136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.412 [2024-10-09 03:18:17.354939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.354953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.412 [2024-10-09 03:18:17.354967] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.354982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.412 [2024-10-09 03:18:17.354995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.355010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.412 [2024-10-09 03:18:17.355024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.355039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.412 [2024-10-09 03:18:17.355052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.355079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.412 [2024-10-09 03:18:17.355095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.355110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.412 [2024-10-09 03:18:17.355123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.355148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:94192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.412 [2024-10-09 03:18:17.355163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.355178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.412 [2024-10-09 03:18:17.355192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.355208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.412 [2024-10-09 03:18:17.355222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.355237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.412 [2024-10-09 03:18:17.355250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.355265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.412 [2024-10-09 03:18:17.355279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.355294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:94680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.412 [2024-10-09 03:18:17.355307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.355322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.412 [2024-10-09 03:18:17.355336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.355350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.412 [2024-10-09 03:18:17.355364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.355395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.412 [2024-10-09 03:18:17.355410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.355425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.412 [2024-10-09 03:18:17.355440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.355455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.412 [2024-10-09 03:18:17.355469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.355485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.412 [2024-10-09 03:18:17.355499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.355515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.412 [2024-10-09 03:18:17.355537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.355554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:94744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.412 [2024-10-09 03:18:17.355569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.355602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:94752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.412 [2024-10-09 03:18:17.355617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.355633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.412 [2024-10-09 03:18:17.355648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.355664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:94768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.412 [2024-10-09 03:18:17.355680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.355697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.412 [2024-10-09 03:18:17.355712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.355729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.412 [2024-10-09 03:18:17.355744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.355760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:94216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.412 [2024-10-09 03:18:17.355774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.355791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:94224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.412 [2024-10-09 03:18:17.355805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.355821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:94232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.412 [2024-10-09 03:18:17.355837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.355853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:94240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.412 [2024-10-09 03:18:17.355868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.355899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:94248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.412 [2024-10-09 03:18:17.355913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.355929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.412 [2024-10-09 03:18:17.355943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 
[2024-10-09 03:18:17.355981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:94264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.412 [2024-10-09 03:18:17.355996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.356011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:94272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.412 [2024-10-09 03:18:17.356025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.356040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.412 [2024-10-09 03:18:17.356053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.356085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:94288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.412 [2024-10-09 03:18:17.356099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.356115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:94296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.412 [2024-10-09 03:18:17.356142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.412 [2024-10-09 03:18:17.356159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:94304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.412 [2024-10-09 03:18:17.356174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.356190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:94312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.413 [2024-10-09 03:18:17.356205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.356221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:94320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.413 [2024-10-09 03:18:17.356235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.356251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:94776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.413 [2024-10-09 03:18:17.356265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.356282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:94784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.413 [2024-10-09 03:18:17.356297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.356313] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:94792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.413 [2024-10-09 03:18:17.356327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.356343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:94800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.413 [2024-10-09 03:18:17.356358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.356374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:94808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.413 [2024-10-09 03:18:17.356397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.356414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:94816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.413 [2024-10-09 03:18:17.356429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.356445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:94824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.413 [2024-10-09 03:18:17.356459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.356475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:94832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.413 [2024-10-09 03:18:17.356490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.356536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:94840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.413 [2024-10-09 03:18:17.356550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.356564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:94848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.413 [2024-10-09 03:18:17.356578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.356593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:94856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.413 [2024-10-09 03:18:17.356607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.356622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:94864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.413 [2024-10-09 03:18:17.356635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.356650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:107 nsid:1 lba:94872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.413 [2024-10-09 03:18:17.356664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.356679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:94880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.413 [2024-10-09 03:18:17.356693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.356708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:94888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.413 [2024-10-09 03:18:17.356722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.356737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.413 [2024-10-09 03:18:17.356750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.356765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:94328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.413 [2024-10-09 03:18:17.356778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.356794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:94336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.413 [2024-10-09 03:18:17.356814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.356830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:94344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.413 [2024-10-09 03:18:17.356844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.356859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:94352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.413 [2024-10-09 03:18:17.356873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.356888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:94360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.413 [2024-10-09 03:18:17.356902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.356917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:94368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.413 [2024-10-09 03:18:17.356931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.356946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:94376 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.413 [2024-10-09 03:18:17.356959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.356974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:94384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.413 [2024-10-09 03:18:17.356987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.357003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:94392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.413 [2024-10-09 03:18:17.357016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.357031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:94400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.413 [2024-10-09 03:18:17.357044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.357075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:94408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.413 [2024-10-09 03:18:17.357089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.357114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.413 [2024-10-09 03:18:17.357128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.357159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:94424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.413 [2024-10-09 03:18:17.357174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.357190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:94432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.413 [2024-10-09 03:18:17.357204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.357232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:94440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.413 [2024-10-09 03:18:17.357248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.357264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:94448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.413 [2024-10-09 03:18:17.357279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.357294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:94904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:45.413 [2024-10-09 03:18:17.357309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.357332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:94912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.413 [2024-10-09 03:18:17.357347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.357363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:94920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.413 [2024-10-09 03:18:17.357378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.357394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:94928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.413 [2024-10-09 03:18:17.357409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.357425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.413 [2024-10-09 03:18:17.357440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.357456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:94944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.413 [2024-10-09 03:18:17.357470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.357486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:94952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.413 [2024-10-09 03:18:17.357516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.413 [2024-10-09 03:18:17.357546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.413 [2024-10-09 03:18:17.357560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.414 [2024-10-09 03:18:17.357592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:94968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.414 [2024-10-09 03:18:17.357607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.414 [2024-10-09 03:18:17.357622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:94976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.414 [2024-10-09 03:18:17.357636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.414 [2024-10-09 03:18:17.357652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:94984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.414 [2024-10-09 03:18:17.357673] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.414 [2024-10-09 03:18:17.357689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:94992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.414 [2024-10-09 03:18:17.357704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.414 [2024-10-09 03:18:17.357720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:95000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.414 [2024-10-09 03:18:17.357734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.414 [2024-10-09 03:18:17.357749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:95008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.414 [2024-10-09 03:18:17.357763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.414 [2024-10-09 03:18:17.357778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.414 [2024-10-09 03:18:17.357792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.414 [2024-10-09 03:18:17.357808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:95024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.414 [2024-10-09 03:18:17.357822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.414 [2024-10-09 03:18:17.357837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:94456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.414 [2024-10-09 03:18:17.357851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.414 [2024-10-09 03:18:17.357871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:94464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.414 [2024-10-09 03:18:17.357885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.414 [2024-10-09 03:18:17.357901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:94472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.414 [2024-10-09 03:18:17.357915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.414 [2024-10-09 03:18:17.357930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:94480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.414 [2024-10-09 03:18:17.357944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.414 [2024-10-09 03:18:17.357959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.414 [2024-10-09 03:18:17.357974] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.414 [2024-10-09 03:18:17.357990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:94496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.414 [2024-10-09 03:18:17.358031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.414 [2024-10-09 03:18:17.358049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:94504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.414 [2024-10-09 03:18:17.358074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.414 [2024-10-09 03:18:17.358114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:94512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.414 [2024-10-09 03:18:17.358131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.414 [2024-10-09 03:18:17.358147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:94520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.414 [2024-10-09 03:18:17.358161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.414 [2024-10-09 03:18:17.358178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:94528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.414 [2024-10-09 03:18:17.358192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.414 [2024-10-09 03:18:17.358208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:94536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.414 [2024-10-09 03:18:17.358233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.414 [2024-10-09 03:18:17.358249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.414 [2024-10-09 03:18:17.358264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.414 [2024-10-09 03:18:17.358280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:94552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.414 [2024-10-09 03:18:17.358294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.414 [2024-10-09 03:18:17.358310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:94560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.414 [2024-10-09 03:18:17.358324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.414 [2024-10-09 03:18:17.358340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.414 [2024-10-09 03:18:17.358355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.414 [2024-10-09 03:18:17.358370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1656850 is same with the state(6) to be set 00:15:45.414 [2024-10-09 03:18:17.358387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.414 [2024-10-09 03:18:17.358398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.414 [2024-10-09 03:18:17.358409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94576 len:8 PRP1 0x0 PRP2 0x0 00:15:45.414 [2024-10-09 03:18:17.358429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.414 [2024-10-09 03:18:17.358450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.414 [2024-10-09 03:18:17.358460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.414 [2024-10-09 03:18:17.358471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95032 len:8 PRP1 0x0 PRP2 0x0 00:15:45.414 [2024-10-09 03:18:17.358485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.414 [2024-10-09 03:18:17.358499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.414 [2024-10-09 03:18:17.358509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.414 [2024-10-09 03:18:17.358535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95040 len:8 PRP1 0x0 PRP2 0x0 00:15:45.414 [2024-10-09 03:18:17.358555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.414 [2024-10-09 03:18:17.358570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.414 [2024-10-09 03:18:17.358580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.414 [2024-10-09 03:18:17.358591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95048 len:8 PRP1 0x0 PRP2 0x0 00:15:45.414 [2024-10-09 03:18:17.358604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.414 [2024-10-09 03:18:17.358617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.414 [2024-10-09 03:18:17.358627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.414 [2024-10-09 03:18:17.358638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95056 len:8 PRP1 0x0 PRP2 0x0 00:15:45.414 [2024-10-09 03:18:17.358651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.414 [2024-10-09 03:18:17.358664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.414 [2024-10-09 03:18:17.358674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.414 [2024-10-09 03:18:17.358684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95064 len:8 PRP1 0x0 PRP2 0x0 00:15:45.414 
[2024-10-09 03:18:17.358697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.414 [2024-10-09 03:18:17.358711] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.414 [2024-10-09 03:18:17.358721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.414 [2024-10-09 03:18:17.358738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95072 len:8 PRP1 0x0 PRP2 0x0 00:15:45.414 [2024-10-09 03:18:17.358752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.414 [2024-10-09 03:18:17.358765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.414 [2024-10-09 03:18:17.358775] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.414 [2024-10-09 03:18:17.358786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95080 len:8 PRP1 0x0 PRP2 0x0 00:15:45.414 [2024-10-09 03:18:17.358799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.414 [2024-10-09 03:18:17.358813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.414 [2024-10-09 03:18:17.358823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.415 [2024-10-09 03:18:17.358833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95088 len:8 PRP1 0x0 PRP2 0x0 00:15:45.415 [2024-10-09 03:18:17.358847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.415 [2024-10-09 03:18:17.358861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.415 [2024-10-09 03:18:17.358871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.415 [2024-10-09 03:18:17.358881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95096 len:8 PRP1 0x0 PRP2 0x0 00:15:45.415 [2024-10-09 03:18:17.358894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.415 [2024-10-09 03:18:17.358908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.415 [2024-10-09 03:18:17.358923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.415 [2024-10-09 03:18:17.358934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95104 len:8 PRP1 0x0 PRP2 0x0 00:15:45.415 [2024-10-09 03:18:17.358947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.415 [2024-10-09 03:18:17.358961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.415 [2024-10-09 03:18:17.358971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.415 [2024-10-09 03:18:17.358981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95112 len:8 PRP1 0x0 PRP2 0x0 00:15:45.415 [2024-10-09 03:18:17.358994] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.415 [2024-10-09 03:18:17.359007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.415 [2024-10-09 03:18:17.359017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.415 [2024-10-09 03:18:17.359027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95120 len:8 PRP1 0x0 PRP2 0x0 00:15:45.415 [2024-10-09 03:18:17.359040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.415 [2024-10-09 03:18:17.359053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.415 [2024-10-09 03:18:17.359091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.415 [2024-10-09 03:18:17.359103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95128 len:8 PRP1 0x0 PRP2 0x0 00:15:45.415 [2024-10-09 03:18:17.359117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.415 [2024-10-09 03:18:17.359131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.415 [2024-10-09 03:18:17.359141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.415 [2024-10-09 03:18:17.359157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95136 len:8 PRP1 0x0 PRP2 0x0 00:15:45.415 [2024-10-09 03:18:17.359171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.415 [2024-10-09 03:18:17.359185] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.415 [2024-10-09 03:18:17.359196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.415 [2024-10-09 03:18:17.359206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95144 len:8 PRP1 0x0 PRP2 0x0 00:15:45.415 [2024-10-09 03:18:17.359219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.415 [2024-10-09 03:18:17.359233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:45.415 [2024-10-09 03:18:17.359244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:45.415 [2024-10-09 03:18:17.359254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95152 len:8 PRP1 0x0 PRP2 0x0 00:15:45.415 [2024-10-09 03:18:17.359268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.415 [2024-10-09 03:18:17.359326] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1656850 was disconnected and freed. reset controller. 
00:15:45.415 [2024-10-09 03:18:17.359343] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:15:45.415 [2024-10-09 03:18:17.359398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.415 [2024-10-09 03:18:17.359420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.415 [2024-10-09 03:18:17.359447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.415 [2024-10-09 03:18:17.359462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.415 [2024-10-09 03:18:17.359493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.415 [2024-10-09 03:18:17.359506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.415 [2024-10-09 03:18:17.359522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.415 [2024-10-09 03:18:17.359536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.415 [2024-10-09 03:18:17.359549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:45.415 [2024-10-09 03:18:17.359614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e42e0 (9): Bad file descriptor 00:15:45.415 [2024-10-09 03:18:17.363512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:45.415 [2024-10-09 03:18:17.403692] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
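The second failover (10.0.0.3:4421 to 10.0.0.3:4422) follows the same pattern as the first: queued I/O is completed manually with the SQ-deletion abort status, the controller is marked failed, disconnected, and then reset successfully on the new path. The per-interval rate samples interleaved with the prints (8000.50 IOPS, 31.25 MiB/s and so on, earlier above) are consistent with 4 KiB I/Os, since every command above carries len:8, i.e. eight logical blocks. The sketch below is only a cross-check of that arithmetic, assuming a 512-byte logical block size; it is not part of the test output.

    # Illustrative cross-check of the interleaved rate samples, e.g.
    # "8000.50 IOPS, 31.25 MiB/s". Assuming 512-byte blocks, "len:8" means a
    # 4 KiB I/O, so MiB/s = IOPS * 4096 / 2**20.
    BLOCK_SIZE = 512          # assumed logical block size
    BLOCKS_PER_IO = 8         # "len:8" in the WRITE/READ prints above

    def mib_per_s(iops: float) -> float:
        return iops * BLOCK_SIZE * BLOCKS_PER_IO / 2**20

    for iops in (8000.50, 8533.67, 8840.25):
        print(f"{iops:.2f} IOPS -> {mib_per_s(iops):.2f} MiB/s")
    # 8000.50 -> 31.25, 8533.67 -> 33.33, 8840.25 -> 34.53, matching the samples above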
00:15:45.415 8878.20 IOPS, 34.68 MiB/s [2024-10-09T03:18:28.718Z] 8974.50 IOPS, 35.06 MiB/s [2024-10-09T03:18:28.718Z] 8899.29 IOPS, 34.76 MiB/s [2024-10-09T03:18:28.718Z] 8796.88 IOPS, 34.36 MiB/s [2024-10-09T03:18:28.718Z] 8855.00 IOPS, 34.59 MiB/s [2024-10-09T03:18:28.718Z] [2024-10-09 03:18:21.968161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:36456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.415 [2024-10-09 03:18:21.968232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.415 [2024-10-09 03:18:21.968264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:36464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.415 [2024-10-09 03:18:21.968282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.415 [2024-10-09 03:18:21.968299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:36472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.415 [2024-10-09 03:18:21.968313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.415 [2024-10-09 03:18:21.968330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:36480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.415 [2024-10-09 03:18:21.968344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.415 [2024-10-09 03:18:21.968361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:36488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.415 [2024-10-09 03:18:21.968376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.415 [2024-10-09 03:18:21.968392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:36496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.415 [2024-10-09 03:18:21.968406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.415 [2024-10-09 03:18:21.968422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:36504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.415 [2024-10-09 03:18:21.968437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.415 [2024-10-09 03:18:21.968453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:36512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.415 [2024-10-09 03:18:21.968491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.415 [2024-10-09 03:18:21.968509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:35880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.415 [2024-10-09 03:18:21.968524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.415 [2024-10-09 03:18:21.968540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:35888 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.415 [2024-10-09 03:18:21.968555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[log condensed: nvme_qpair.c: 243:nvme_io_qpair_print_command and 474:spdk_nvme_print_completion repeat the same pair of notices for every I/O still queued on qid:1 (READs at lba 35896 through 36440 and WRITEs at lba 36520 through 36896, len:8 each), all completed with ABORTED - SQ DELETION (00/08) while the submission queue is deleted]
00:15:45.418 [2024-10-09 03:18:21.972306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657270 is same with the state(6) to be set
00:15:45.418 [2024-10-09 03:18:21.972324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-10-09 03:18:21.972335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-10-09 03:18:21.972346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36448 len:8 PRP1 0x0 PRP2 0x0
[2024-10-09 03:18:21.972365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-09 03:18:21.972422] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1657270 was disconnected and freed. reset controller.
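The burst of ABORTED - SQ DELETION completions above is the tear-down path of the active TCP qpair during failover: every I/O still queued on qid:1 is completed with that status before the qpair is freed and the controller reset begins. As a rough, illustrative check (not something failover.sh does at this point), the same pattern can be counted from the bdevperf log captured by this run; the try.txt path is the file the test reads later and is assumed here:

  log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
  # queued commands completed with ABORTED - SQ DELETION during qpair tear-down
  grep -c 'ABORTED - SQ DELETION' "$log"
  # split the in-flight mix by opcode on the I/O queue
  grep -c 'nvme_io_qpair_print_command: \*NOTICE\*: READ sqid:1' "$log"
  grep -c 'nvme_io_qpair_print_command: \*NOTICE\*: WRITE sqid:1' "$log"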
00:15:45.418 [2024-10-09 03:18:21.972440] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:15:45.418 [2024-10-09 03:18:21.972495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.418 [2024-10-09 03:18:21.972517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.418 [2024-10-09 03:18:21.972533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.418 [2024-10-09 03:18:21.972547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.418 [2024-10-09 03:18:21.972562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.418 [2024-10-09 03:18:21.972576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.418 [2024-10-09 03:18:21.972591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.418 [2024-10-09 03:18:21.972615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.418 [2024-10-09 03:18:21.972630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:45.418 [2024-10-09 03:18:21.972666] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e42e0 (9): Bad file descriptor 00:15:45.418 [2024-10-09 03:18:21.976500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:45.418 [2024-10-09 03:18:22.009555] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
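The "Resetting controller successful" notice above is the string the test keys on: failover.sh expects exactly three of them across the failover sequence, and the grep -c / count check in the trace just below enforces that. A minimal sketch of the same check follows; the file argument is not visible in the trace, so the try.txt path here is an assumption:

  count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, got $count" >&2
      exit 1
  fi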
00:15:45.418 8877.40 IOPS, 34.68 MiB/s [2024-10-09T03:18:28.721Z] 8954.73 IOPS, 34.98 MiB/s [2024-10-09T03:18:28.721Z] 8986.50 IOPS, 35.10 MiB/s [2024-10-09T03:18:28.721Z] 9016.46 IOPS, 35.22 MiB/s [2024-10-09T03:18:28.721Z] 9047.29 IOPS, 35.34 MiB/s [2024-10-09T03:18:28.721Z] 9058.00 IOPS, 35.38 MiB/s 00:15:45.418 Latency(us) 00:15:45.418 [2024-10-09T03:18:28.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:45.418 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:45.418 Verification LBA range: start 0x0 length 0x4000 00:15:45.418 NVMe0n1 : 15.01 9059.28 35.39 231.66 0.00 13745.70 610.68 28716.68 00:15:45.418 [2024-10-09T03:18:28.721Z] =================================================================================================================== 00:15:45.418 [2024-10-09T03:18:28.721Z] Total : 9059.28 35.39 231.66 0.00 13745.70 610.68 28716.68 00:15:45.418 Received shutdown signal, test time was about 15.000000 seconds 00:15:45.418 00:15:45.418 Latency(us) 00:15:45.418 [2024-10-09T03:18:28.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:45.418 [2024-10-09T03:18:28.721Z] =================================================================================================================== 00:15:45.418 [2024-10-09T03:18:28.721Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:45.418 03:18:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:15:45.418 03:18:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:15:45.418 03:18:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:15:45.418 03:18:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75447 00:15:45.418 03:18:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:15:45.418 03:18:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75447 /var/tmp/bdevperf.sock 00:15:45.418 03:18:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 75447 ']' 00:15:45.418 03:18:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:45.418 03:18:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:45.418 03:18:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:45.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:15:45.419 03:18:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:45.419 03:18:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:45.678 03:18:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:45.678 03:18:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:15:45.678 03:18:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:45.937 [2024-10-09 03:18:29.090840] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:45.937 03:18:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:46.196 [2024-10-09 03:18:29.395948] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:15:46.196 03:18:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:46.455 NVMe0n1 00:15:46.455 03:18:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:47.022 00:15:47.022 03:18:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:47.281 00:15:47.281 03:18:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:47.281 03:18:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:15:47.540 03:18:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:47.798 03:18:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:15:51.086 03:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:51.086 03:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:15:51.086 03:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75534 00:15:51.086 03:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:51.086 03:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75534 00:15:52.464 { 00:15:52.464 "results": [ 00:15:52.464 { 00:15:52.464 "job": "NVMe0n1", 00:15:52.464 "core_mask": "0x1", 00:15:52.464 "workload": "verify", 00:15:52.464 "status": "finished", 00:15:52.464 "verify_range": { 00:15:52.464 "start": 0, 00:15:52.464 "length": 16384 00:15:52.464 }, 00:15:52.464 "queue_depth": 128, 
00:15:52.464 "io_size": 4096, 00:15:52.464 "runtime": 1.016361, 00:15:52.464 "iops": 7325.153169001959, 00:15:52.464 "mibps": 28.613879566413903, 00:15:52.464 "io_failed": 0, 00:15:52.464 "io_timeout": 0, 00:15:52.464 "avg_latency_us": 17405.41671164296, 00:15:52.464 "min_latency_us": 2189.498181818182, 00:15:52.464 "max_latency_us": 15013.701818181818 00:15:52.464 } 00:15:52.464 ], 00:15:52.464 "core_count": 1 00:15:52.464 } 00:15:52.464 03:18:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:52.464 [2024-10-09 03:18:27.856971] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:15:52.464 [2024-10-09 03:18:27.857105] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75447 ] 00:15:52.464 [2024-10-09 03:18:27.988937] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.464 [2024-10-09 03:18:28.100269] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.464 [2024-10-09 03:18:28.157044] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:52.464 [2024-10-09 03:18:30.989930] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:15:52.464 [2024-10-09 03:18:30.990113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.464 [2024-10-09 03:18:30.990140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.464 [2024-10-09 03:18:30.990160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.464 [2024-10-09 03:18:30.990174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.464 [2024-10-09 03:18:30.990189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.464 [2024-10-09 03:18:30.990202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.464 [2024-10-09 03:18:30.990216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.464 [2024-10-09 03:18:30.990230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.464 [2024-10-09 03:18:30.990245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:52.464 [2024-10-09 03:18:30.990294] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:52.464 [2024-10-09 03:18:30.990325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c002e0 (9): Bad file descriptor 00:15:52.464 [2024-10-09 03:18:30.995260] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:52.464 Running I/O for 1 seconds... 
00:15:52.464 7317.00 IOPS, 28.58 MiB/s 00:15:52.464 Latency(us) 00:15:52.464 [2024-10-09T03:18:35.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.464 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:52.464 Verification LBA range: start 0x0 length 0x4000 00:15:52.464 NVMe0n1 : 1.02 7325.15 28.61 0.00 0.00 17405.42 2189.50 15013.70 00:15:52.464 [2024-10-09T03:18:35.767Z] =================================================================================================================== 00:15:52.464 [2024-10-09T03:18:35.767Z] Total : 7325.15 28.61 0.00 0.00 17405.42 2189.50 15013.70 00:15:52.464 03:18:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:52.465 03:18:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:15:52.724 03:18:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:52.984 03:18:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:15:52.984 03:18:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:53.243 03:18:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:53.514 03:18:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:15:56.816 03:18:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:56.816 03:18:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:15:56.816 03:18:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75447 00:15:56.816 03:18:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75447 ']' 00:15:56.816 03:18:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75447 00:15:56.816 03:18:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:15:56.816 03:18:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:56.816 03:18:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75447 00:15:56.816 killing process with pid 75447 00:15:56.816 03:18:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:56.816 03:18:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:56.816 03:18:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75447' 00:15:56.816 03:18:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 75447 00:15:56.816 03:18:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 75447 00:15:56.816 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:15:56.816 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:57.074 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:15:57.074 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:57.074 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:15:57.074 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:57.074 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:15:57.074 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:57.074 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:15:57.075 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:57.075 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:57.075 rmmod nvme_tcp 00:15:57.333 rmmod nvme_fabrics 00:15:57.333 rmmod nvme_keyring 00:15:57.333 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:57.333 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:15:57.333 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:15:57.333 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 75186 ']' 00:15:57.333 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 75186 00:15:57.333 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75186 ']' 00:15:57.333 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75186 00:15:57.333 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:15:57.333 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:57.333 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75186 00:15:57.333 killing process with pid 75186 00:15:57.333 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:57.333 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:57.333 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75186' 00:15:57.334 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 75186 00:15:57.334 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 75186 00:15:57.592 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:57.592 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:57.592 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:57.592 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:15:57.592 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:15:57.592 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:15:57.592 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:57.592 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:57.592 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:57.592 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:57.592 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:57.592 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:57.592 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:57.592 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:57.592 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:57.592 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:57.592 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:57.592 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:57.592 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:57.592 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:57.592 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:57.851 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:57.851 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:57.851 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.851 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:57.851 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.851 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:15:57.851 00:15:57.851 real 0m34.090s 00:15:57.851 user 2m11.217s 00:15:57.851 sys 0m5.746s 00:15:57.851 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:57.851 03:18:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:57.851 ************************************ 00:15:57.851 END TEST nvmf_failover 00:15:57.851 ************************************ 00:15:57.851 03:18:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:57.851 03:18:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:57.851 03:18:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:57.851 03:18:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:57.851 ************************************ 00:15:57.851 START TEST nvmf_host_discovery 00:15:57.851 ************************************ 00:15:57.851 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:57.851 * Looking for test storage... 
00:15:57.851 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:57.851 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:57.851 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:15:57.851 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:58.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.112 --rc genhtml_branch_coverage=1 00:15:58.112 --rc genhtml_function_coverage=1 00:15:58.112 --rc genhtml_legend=1 00:15:58.112 --rc geninfo_all_blocks=1 00:15:58.112 --rc geninfo_unexecuted_blocks=1 00:15:58.112 00:15:58.112 ' 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:58.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.112 --rc genhtml_branch_coverage=1 00:15:58.112 --rc genhtml_function_coverage=1 00:15:58.112 --rc genhtml_legend=1 00:15:58.112 --rc geninfo_all_blocks=1 00:15:58.112 --rc geninfo_unexecuted_blocks=1 00:15:58.112 00:15:58.112 ' 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:58.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.112 --rc genhtml_branch_coverage=1 00:15:58.112 --rc genhtml_function_coverage=1 00:15:58.112 --rc genhtml_legend=1 00:15:58.112 --rc geninfo_all_blocks=1 00:15:58.112 --rc geninfo_unexecuted_blocks=1 00:15:58.112 00:15:58.112 ' 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:58.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.112 --rc genhtml_branch_coverage=1 00:15:58.112 --rc genhtml_function_coverage=1 00:15:58.112 --rc genhtml_legend=1 00:15:58.112 --rc geninfo_all_blocks=1 00:15:58.112 --rc geninfo_unexecuted_blocks=1 00:15:58.112 00:15:58.112 ' 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:15:58.112 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:58.113 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@458 -- # nvmf_veth_init 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
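The nvmf_veth_init variables above describe the small virtual topology that the trace builds a few lines further down. Condensed into a runnable form for reference (root privileges and iproute2 assumed; the second interface pair nvmf_init_if2/nvmf_tgt_if2 and the firewall rules are omitted here):

    # sketch of the veth/bridge/namespace layout used by the test:
    # initiator leg stays in the root namespace, target leg moves into nvmf_tgt_ns_spdk,
    # and both are joined by the nvmf_br bridge.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ping -c 1 10.0.0.3        # initiator -> target reachability check, as in the trace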
00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:58.113 Cannot find device "nvmf_init_br" 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:58.113 Cannot find device "nvmf_init_br2" 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:58.113 Cannot find device "nvmf_tgt_br" 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:58.113 Cannot find device "nvmf_tgt_br2" 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:58.113 Cannot find device "nvmf_init_br" 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:58.113 Cannot find device "nvmf_init_br2" 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:58.113 Cannot find device "nvmf_tgt_br" 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:58.113 Cannot find device "nvmf_tgt_br2" 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:58.113 Cannot find device "nvmf_br" 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:58.113 Cannot find device "nvmf_init_if" 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:58.113 Cannot find device "nvmf_init_if2" 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:58.113 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:58.113 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:58.113 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:58.114 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:58.373 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:58.373 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:58.373 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:58.373 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:58.373 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:58.373 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:58.373 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:58.373 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:58.373 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:58.373 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:58.373 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:58.373 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:58.373 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:58.373 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:58.373 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:58.373 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:58.373 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:58.373 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:58.373 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:58.373 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:58.373 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:58.373 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:58.373 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:58.373 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:58.373 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:58.373 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:58.373 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:58.373 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:15:58.373 00:15:58.373 --- 10.0.0.3 ping statistics --- 00:15:58.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.373 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:15:58.373 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:58.373 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:58.373 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:15:58.373 00:15:58.373 --- 10.0.0.4 ping statistics --- 00:15:58.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.373 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:58.373 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:58.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:58.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:15:58.373 00:15:58.373 --- 10.0.0.1 ping statistics --- 00:15:58.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.373 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:58.373 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:58.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:58.373 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:15:58.373 00:15:58.373 --- 10.0.0.2 ping statistics --- 00:15:58.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.373 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:15:58.373 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:58.373 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # return 0 00:15:58.373 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:58.374 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:58.374 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:58.374 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:58.374 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:58.374 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:58.374 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:58.374 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:58.374 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:58.374 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:58.374 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.374 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=75869 00:15:58.374 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:58.374 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 75869 00:15:58.374 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 75869 ']' 00:15:58.374 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.374 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:58.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.374 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.374 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:58.374 03:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.374 [2024-10-09 03:18:41.659752] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
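The iptables lines in the trace come from two small helpers in test/nvmf/common.sh: ipts tags every rule it inserts with an SPDK_NVMF comment, and iptr (used earlier during the failover teardown) removes exactly those rules by filtering iptables-save output. Roughly, and only as a sketch of the behavior visible here (root required):

    ipts() {   # insert a rule and tag it so teardown can find it again
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    iptr() {   # drop only the SPDK_NVMF-tagged rules, keep everything else
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP into the test interface
    iptr                                                            # run at the end of the test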
00:15:58.374 [2024-10-09 03:18:41.659841] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.633 [2024-10-09 03:18:41.798481] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.633 [2024-10-09 03:18:41.898433] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:58.633 [2024-10-09 03:18:41.898497] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:58.633 [2024-10-09 03:18:41.898512] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:58.633 [2024-10-09 03:18:41.898523] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:58.633 [2024-10-09 03:18:41.898539] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:58.633 [2024-10-09 03:18:41.898990] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.892 [2024-10-09 03:18:41.953163] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:59.466 03:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:59.466 03:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:15:59.466 03:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:59.466 03:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:59.466 03:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.466 03:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:59.466 03:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:59.466 03:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.466 03:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.466 [2024-10-09 03:18:42.692102] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:59.466 03:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.466 03:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:15:59.466 03:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.466 03:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.466 [2024-10-09 03:18:42.700260] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:15:59.466 03:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.466 03:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:59.467 03:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.467 03:18:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.467 null0 00:15:59.467 03:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.467 03:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:59.467 03:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.467 03:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.467 null1 00:15:59.467 03:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.467 03:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:59.467 03:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.467 03:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.467 03:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.467 03:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75901 00:15:59.467 03:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:59.467 03:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75901 /tmp/host.sock 00:15:59.467 03:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 75901 ']' 00:15:59.467 03:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:15:59.467 03:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:59.467 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:59.467 03:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:59.467 03:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:59.467 03:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.726 [2024-10-09 03:18:42.790541] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
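At this point the test is running two SPDK application instances, and which one an RPC call reaches depends only on the socket passed to rpc_cmd. A condensed sketch of the layout, with paths taken from the trace (the real scripts also wrap these in waitforlisten and trap handlers):

    # target-side nvmf_tgt runs inside the namespace, RPC on the default /var/tmp/spdk.sock
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    # host-side nvmf_tgt runs in the root namespace, RPC on /tmp/host.sock
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &

    # rpc_cmd without -s talks to the target; rpc_cmd -s /tmp/host.sock talks to the host:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers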
00:15:59.726 [2024-10-09 03:18:42.790655] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75901 ] 00:15:59.726 [2024-10-09 03:18:42.931500] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.985 [2024-10-09 03:18:43.036970] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.985 [2024-10-09 03:18:43.094002] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:00.553 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:00.553 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:16:00.554 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:00.554 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:16:00.554 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.554 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.554 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.554 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:16:00.554 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.554 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.554 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.554 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:16:00.554 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:16:00.554 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:00.554 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.554 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.554 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:00.554 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:00.554 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:00.554 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.812 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:00.812 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:16:00.812 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:00.812 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.812 03:18:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.812 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:00.812 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:00.812 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:00.812 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.812 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:16:00.812 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:16:00.812 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.812 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.813 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.813 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:16:00.813 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:00.813 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:00.813 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.813 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.813 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:00.813 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:00.813 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.813 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:00.813 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:16:00.813 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:00.813 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:00.813 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.813 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:00.813 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.813 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:00.813 03:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.813 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:16:00.813 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:00.813 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.813 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.813 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.813 03:18:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:16:00.813 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:00.813 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.813 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.813 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:00.813 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:00.813 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:00.813 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.813 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:16:00.813 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:16:00.813 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:00.813 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:00.813 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:00.813 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.813 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.813 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:00.813 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.072 [2024-10-09 03:18:44.140596] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:16:01.072 03:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:16:01.656 [2024-10-09 03:18:44.805670] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:01.656 [2024-10-09 03:18:44.805719] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:01.656 [2024-10-09 03:18:44.805739] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:01.656 
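Stripped of the xtrace noise, the discovery setup performed so far amounts to a short RPC sequence; restated below as direct rpc.py invocations (rpc_cmd is a thin wrapper around scripts/rpc.py, and null0/null1 were already created with bdev_null_create):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # host side: start the discovery service against the target's 8009 listener
    $RPC -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
    # target side: expose a subsystem the discovery service can then attach to
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test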
[2024-10-09 03:18:44.811715] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:16:01.656 [2024-10-09 03:18:44.869041] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:01.656 [2024-10-09 03:18:44.869084] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:02.223 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:02.223 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:02.223 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:16:02.223 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:02.223 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:02.223 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.223 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.223 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:02.223 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:02.223 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.223 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.223 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:02.223 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:02.223 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:02.223 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:02.223 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:02.223 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:16:02.223 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:16:02.223 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:02.223 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:02.223 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.223 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:02.224 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.224 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:02.224 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.224 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
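The repeated "local max=10 / (( max-- )) / eval / sleep 1" lines in the trace are the waitforcondition helper from common/autotest_common.sh polling until an expression becomes true. Its shape, simplified to what the xtrace output shows:

    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            if eval "$cond"; then
                return 0        # e.g. [[ "$(get_subsystem_names)" == "nvme0" ]] finally held
            fi
            sleep 1
        done
        return 1                # condition never became true within ~10 seconds
    }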
00:16:02.224 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:02.224 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:02.224 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:02.224 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:02.224 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:02.224 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:16:02.224 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:16:02.224 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:02.224 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.224 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.224 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:02.224 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:02.224 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:02.224 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.483 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:16:02.483 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:02.483 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:16:02.483 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:02.483 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:02.483 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:02.483 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:02.483 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:02.483 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:02.483 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:02.483 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:02.483 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.483 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.483 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:02.483 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.483 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:02.483 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:16:02.483 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:02.483 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:02.483 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:02.483 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.483 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.483 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.483 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:02.483 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:02.483 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:02.483 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:02.483 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:02.483 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:16:02.483 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:02.483 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.483 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # local max=10 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.484 [2024-10-09 03:18:45.738145] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:02.484 [2024-10-09 03:18:45.739031] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:16:02.484 [2024-10-09 03:18:45.739073] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:02.484 [2024-10-09 03:18:45.745094] bdev_nvme.c:7077:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:02.484 03:18:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.484 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:02.759 [2024-10-09 03:18:45.809553] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:02.759 [2024-10-09 03:18:45.809576] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:02.759 [2024-10-09 03:18:45.809583] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT 
$NVMF_SECOND_PORT" ]]' 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:02.759 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:02.760 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:02.760 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:02.760 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.760 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.760 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.760 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:02.760 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:02.760 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:02.760 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:02.760 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:02.760 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.760 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.760 [2024-10-09 03:18:45.995362] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:16:02.760 [2024-10-09 03:18:45.995392] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:02.760 [2024-10-09 03:18:45.996506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.760 [2024-10-09 03:18:45.996549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.760 [2024-10-09 03:18:45.996578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.760 [2024-10-09 03:18:45.996587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.760 [2024-10-09 03:18:45.996596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.760 [2024-10-09 03:18:45.996604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.760 [2024-10-09 03:18:45.996613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.760 [2024-10-09 03:18:45.996621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.760 [2024-10-09 03:18:45.996629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db6950 is same with the state(6) to be set 00:16:02.760 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.760 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:02.760 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:02.760 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local 
max=10 00:16:02.760 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:02.760 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:02.760 03:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:16:02.760 [2024-10-09 03:18:46.001387] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:16:02.760 [2024-10-09 03:18:46.001425] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:02.760 [2024-10-09 03:18:46.001502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1db6950 (9): Bad file descriptor 00:16:02.760 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:02.760 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:02.760 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:02.760 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:02.760 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.760 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.760 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.760 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.760 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:03.022 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:03.022 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:03.022 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:03.022 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:03.022 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:03.022 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:16:03.022 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:03.022 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 
-- # (( max-- )) 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.023 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.283 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:16:03.283 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:03.283 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:16:03.283 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:16:03.283 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:03.283 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:03.283 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:03.283 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:03.283 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:03.283 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:03.283 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:03.283 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:03.283 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.283 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.283 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.283 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:16:03.283 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:16:03.283 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:03.283 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:03.283 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:03.283 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.283 03:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.219 [2024-10-09 03:18:47.426302] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:04.219 [2024-10-09 03:18:47.426505] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:04.219 [2024-10-09 03:18:47.426554] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:04.219 [2024-10-09 03:18:47.432333] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:16:04.219 [2024-10-09 03:18:47.493297] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:04.219 [2024-10-09 03:18:47.493335] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:04.219 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.219 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:04.219 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:16:04.219 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:04.219 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:04.219 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:04.219 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:04.219 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:04.219 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:04.219 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.219 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.219 request: 00:16:04.219 { 00:16:04.219 "name": "nvme", 00:16:04.219 "trtype": "tcp", 00:16:04.219 "traddr": "10.0.0.3", 00:16:04.219 "adrfam": "ipv4", 00:16:04.219 "trsvcid": "8009", 00:16:04.219 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:04.219 "wait_for_attach": true, 00:16:04.219 "method": "bdev_nvme_start_discovery", 00:16:04.219 "req_id": 1 00:16:04.219 } 00:16:04.219 Got JSON-RPC error response 00:16:04.219 response: 00:16:04.220 { 00:16:04.220 "code": -17, 00:16:04.220 "message": "File exists" 00:16:04.220 } 00:16:04.220 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:04.220 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:16:04.220 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:04.220 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:04.220 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:04.220 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:16:04.220 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:04.220 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:04.220 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.220 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.220 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:04.220 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:04.479 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.479 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:16:04.479 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:16:04.479 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:04.479 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:04.479 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.479 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:04.479 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:04.479 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.479 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.479 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:04.479 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:04.479 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:16:04.479 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:04.479 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:04.479 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:04.479 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:04.479 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:04.479 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:04.479 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.479 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.479 request: 00:16:04.479 { 00:16:04.479 "name": "nvme_second", 00:16:04.479 "trtype": "tcp", 00:16:04.479 "traddr": "10.0.0.3", 00:16:04.479 "adrfam": "ipv4", 00:16:04.479 "trsvcid": "8009", 00:16:04.479 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:04.479 "wait_for_attach": true, 00:16:04.479 "method": "bdev_nvme_start_discovery", 00:16:04.479 "req_id": 1 00:16:04.479 } 00:16:04.479 Got JSON-RPC error response 00:16:04.479 response: 00:16:04.479 { 00:16:04.479 "code": -17, 00:16:04.479 "message": "File exists" 00:16:04.479 } 00:16:04.479 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:04.479 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:16:04.479 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:04.479 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:04.479 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:04.479 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:16:04.479 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:04.479 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.479 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:04.479 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.479 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:04.479 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:04.479 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.480 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 
00:16:04.480 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:16:04.480 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:04.480 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.480 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.480 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:04.480 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:04.480 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:04.480 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.480 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:04.480 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:04.480 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:16:04.480 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:04.480 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:04.480 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:04.480 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:04.480 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:04.480 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:04.480 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.480 03:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:05.858 [2024-10-09 03:18:48.765665] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:05.858 [2024-10-09 03:18:48.765731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e258f0 with addr=10.0.0.3, port=8010 00:16:05.858 [2024-10-09 03:18:48.765755] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:05.858 [2024-10-09 03:18:48.765765] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:05.858 [2024-10-09 03:18:48.765773] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:16:06.795 [2024-10-09 03:18:49.765673] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:06.795 [2024-10-09 03:18:49.765738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e258f0 with addr=10.0.0.3, port=8010 00:16:06.795 [2024-10-09 03:18:49.765761] 
nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:06.795 [2024-10-09 03:18:49.765771] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:06.795 [2024-10-09 03:18:49.765780] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:16:07.732 [2024-10-09 03:18:50.765554] bdev_nvme.c:7196:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:16:07.732 request: 00:16:07.732 { 00:16:07.732 "name": "nvme_second", 00:16:07.732 "trtype": "tcp", 00:16:07.732 "traddr": "10.0.0.3", 00:16:07.732 "adrfam": "ipv4", 00:16:07.732 "trsvcid": "8010", 00:16:07.732 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:07.732 "wait_for_attach": false, 00:16:07.732 "attach_timeout_ms": 3000, 00:16:07.732 "method": "bdev_nvme_start_discovery", 00:16:07.732 "req_id": 1 00:16:07.732 } 00:16:07.732 Got JSON-RPC error response 00:16:07.732 response: 00:16:07.732 { 00:16:07.732 "code": -110, 00:16:07.732 "message": "Connection timed out" 00:16:07.732 } 00:16:07.732 03:18:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:07.732 03:18:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:16:07.732 03:18:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:07.732 03:18:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:07.732 03:18:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:07.732 03:18:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:16:07.732 03:18:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:07.732 03:18:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:07.732 03:18:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.732 03:18:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:07.732 03:18:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.732 03:18:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:07.732 03:18:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.732 03:18:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:16:07.732 03:18:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:16:07.732 03:18:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75901 00:16:07.732 03:18:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:16:07.732 03:18:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:07.732 03:18:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:16:07.732 03:18:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:07.732 03:18:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:16:07.732 03:18:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:07.732 03:18:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe 
-v -r nvme-tcp 00:16:07.732 rmmod nvme_tcp 00:16:07.732 rmmod nvme_fabrics 00:16:07.732 rmmod nvme_keyring 00:16:07.732 03:18:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:07.732 03:18:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:16:07.733 03:18:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:16:07.733 03:18:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 75869 ']' 00:16:07.733 03:18:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 75869 00:16:07.733 03:18:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 75869 ']' 00:16:07.733 03:18:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 75869 00:16:07.733 03:18:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:16:07.733 03:18:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:07.733 03:18:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75869 00:16:07.733 killing process with pid 75869 00:16:07.733 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:07.733 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:07.733 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75869' 00:16:07.733 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 75869 00:16:07.733 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 75869 00:16:07.991 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:07.991 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:07.991 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:07.991 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:16:07.991 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:16:07.991 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:07.991 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:16:07.991 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:07.991 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:07.991 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:07.991 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:07.991 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:07.991 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:08.251 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:08.251 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 
00:16:08.251 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:08.251 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:08.251 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:08.251 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:08.251 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:08.251 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:08.251 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:08.251 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:08.251 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.251 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:08.251 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.251 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:16:08.251 00:16:08.251 real 0m10.461s 00:16:08.251 user 0m19.731s 00:16:08.251 sys 0m2.073s 00:16:08.251 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:08.251 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:08.251 ************************************ 00:16:08.251 END TEST nvmf_host_discovery 00:16:08.251 ************************************ 00:16:08.251 03:18:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:08.251 03:18:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:08.251 03:18:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:08.251 03:18:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.251 ************************************ 00:16:08.251 START TEST nvmf_host_multipath_status 00:16:08.251 ************************************ 00:16:08.251 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:08.511 * Looking for test storage... 
00:16:08.511 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:08.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.511 --rc genhtml_branch_coverage=1 00:16:08.511 --rc genhtml_function_coverage=1 00:16:08.511 --rc genhtml_legend=1 00:16:08.511 --rc geninfo_all_blocks=1 00:16:08.511 --rc geninfo_unexecuted_blocks=1 00:16:08.511 00:16:08.511 ' 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:08.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.511 --rc genhtml_branch_coverage=1 00:16:08.511 --rc genhtml_function_coverage=1 00:16:08.511 --rc genhtml_legend=1 00:16:08.511 --rc geninfo_all_blocks=1 00:16:08.511 --rc geninfo_unexecuted_blocks=1 00:16:08.511 00:16:08.511 ' 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:08.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.511 --rc genhtml_branch_coverage=1 00:16:08.511 --rc genhtml_function_coverage=1 00:16:08.511 --rc genhtml_legend=1 00:16:08.511 --rc geninfo_all_blocks=1 00:16:08.511 --rc geninfo_unexecuted_blocks=1 00:16:08.511 00:16:08.511 ' 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:08.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.511 --rc genhtml_branch_coverage=1 00:16:08.511 --rc genhtml_function_coverage=1 00:16:08.511 --rc genhtml_legend=1 00:16:08.511 --rc geninfo_all_blocks=1 00:16:08.511 --rc geninfo_unexecuted_blocks=1 00:16:08.511 00:16:08.511 ' 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:08.511 03:18:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.511 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:08.512 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # nvmf_veth_init 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:08.512 Cannot find device "nvmf_init_br" 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:08.512 Cannot find device "nvmf_init_br2" 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:08.512 Cannot find device "nvmf_tgt_br" 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:16:08.512 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:08.772 Cannot find device "nvmf_tgt_br2" 00:16:08.772 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:16:08.772 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:08.772 Cannot find device "nvmf_init_br" 00:16:08.772 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:16:08.772 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:08.772 Cannot find device "nvmf_init_br2" 00:16:08.772 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:16:08.772 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:08.772 Cannot find device "nvmf_tgt_br" 00:16:08.772 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:16:08.772 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:08.772 Cannot find device "nvmf_tgt_br2" 00:16:08.772 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:16:08.772 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:08.772 Cannot find device "nvmf_br" 00:16:08.772 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:16:08.772 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:16:08.772 Cannot find device "nvmf_init_if" 00:16:08.772 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:16:08.772 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:08.772 Cannot find device "nvmf_init_if2" 00:16:08.772 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:16:08.772 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:08.772 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:08.772 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:16:08.772 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:08.772 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:08.772 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:16:08.772 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:08.772 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:08.772 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:08.772 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:08.772 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:08.772 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:08.773 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:08.773 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:08.773 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:08.773 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:08.773 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:08.773 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:08.773 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:08.773 03:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:08.773 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:08.773 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:08.773 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:08.773 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:08.773 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:08.773 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:08.773 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:08.773 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:08.773 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:08.773 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:08.773 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:09.034 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:09.034 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:09.034 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:09.034 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:09.034 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:09.034 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:09.034 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:09.034 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:09.034 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:09.034 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:16:09.034 00:16:09.034 --- 10.0.0.3 ping statistics --- 00:16:09.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.034 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:16:09.034 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:09.034 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:09.034 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:16:09.034 00:16:09.034 --- 10.0.0.4 ping statistics --- 00:16:09.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.034 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:16:09.034 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:09.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:09.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:16:09.034 00:16:09.034 --- 10.0.0.1 ping statistics --- 00:16:09.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.034 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:16:09.034 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:09.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:09.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:16:09.034 00:16:09.034 --- 10.0.0.2 ping statistics --- 00:16:09.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.034 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:16:09.034 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:09.034 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # return 0 00:16:09.034 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:09.034 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:09.034 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:09.034 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:09.034 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:09.034 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:09.034 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:09.034 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:16:09.034 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:09.034 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:09.034 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:09.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.034 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=76414 00:16:09.034 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 76414 00:16:09.034 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 76414 ']' 00:16:09.034 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:09.034 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.034 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:09.034 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
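For orientation, a minimal sketch of the virtual topology that nvmf_veth_init just assembled above (interface names, addresses and iptables rules copied from the log; only one of the two initiator/target veth pairs is shown, and the teardown attempts that precede it are omitted):

    # Initiator end stays in the root namespace, target end (10.0.0.3) moves into
    # nvmf_tgt_ns_spdk; the bridge-side peers are joined by the nvmf_br bridge.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # Let NVMe/TCP traffic reach port 4420 and cross the bridge, then verify reachability.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3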
00:16:09.034 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:09.034 03:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:09.034 [2024-10-09 03:18:52.210102] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:16:09.034 [2024-10-09 03:18:52.210180] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:09.294 [2024-10-09 03:18:52.344891] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:09.294 [2024-10-09 03:18:52.439117] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:09.294 [2024-10-09 03:18:52.439465] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:09.294 [2024-10-09 03:18:52.439667] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:09.294 [2024-10-09 03:18:52.439782] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:09.294 [2024-10-09 03:18:52.439875] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:09.294 [2024-10-09 03:18:52.440523] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.294 [2024-10-09 03:18:52.440532] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.294 [2024-10-09 03:18:52.498230] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:10.232 03:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:10.232 03:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:16:10.232 03:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:10.232 03:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:10.232 03:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:10.232 03:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:10.232 03:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76414 00:16:10.232 03:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:10.492 [2024-10-09 03:18:53.580326] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:10.492 03:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:10.751 Malloc0 00:16:10.751 03:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:11.010 03:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:11.269 03:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:11.528 [2024-10-09 03:18:54.617389] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:11.528 03:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:11.788 [2024-10-09 03:18:54.905639] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:11.788 03:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76470 00:16:11.788 03:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:11.788 03:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:11.788 03:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76470 /var/tmp/bdevperf.sock 00:16:11.788 03:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 76470 ']' 00:16:11.788 03:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:11.788 03:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:11.788 03:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:11.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
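Condensed from the xtrace above, the target/host bring-up that just ran amounts to the following RPC sequence (arguments copied verbatim from the log; nvmf_tgt itself was launched earlier inside the nvmf_tgt_ns_spdk namespace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # TCP transport plus a 64 MiB / 512 B-block malloc bdev to export
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    # One subsystem with ANA reporting (-r), open to any host (-a)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # Two listeners on the same address -> two NVMe/TCP paths for multipath
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    # Host side: bdevperf waits for RPCs (-z) on its own socket before running verify I/O
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 90 &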
00:16:11.788 03:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:11.788 03:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:13.169 03:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:13.169 03:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:16:13.169 03:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:13.169 03:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:13.429 Nvme0n1 00:16:13.429 03:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:13.997 Nvme0n1 00:16:13.997 03:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:13.997 03:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:16:15.903 03:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:16:15.903 03:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:16.163 03:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:16.423 03:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:16:17.408 03:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:16:17.408 03:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:17.408 03:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.408 03:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:17.667 03:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.667 03:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:17.667 03:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.667 03:19:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:18.235 03:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:18.235 03:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:18.235 03:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:18.235 03:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:18.494 03:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:18.494 03:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:18.494 03:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:18.494 03:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:18.754 03:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:18.754 03:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:18.754 03:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:18.754 03:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:19.013 03:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:19.013 03:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:19.013 03:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:19.013 03:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:19.581 03:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:19.581 03:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:16:19.581 03:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:19.581 03:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:20.149 03:19:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:16:21.085 03:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:16:21.085 03:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:21.085 03:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.085 03:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:21.343 03:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:21.343 03:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:21.343 03:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.343 03:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:21.601 03:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:21.601 03:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:21.601 03:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.601 03:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:21.860 03:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:21.860 03:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:21.860 03:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.860 03:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:22.120 03:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:22.120 03:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:22.120 03:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:22.120 03:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:22.688 03:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:22.688 03:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:22.688 03:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:22.688 03:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:22.688 03:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:22.688 03:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:16:22.688 03:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:22.947 03:19:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:16:23.516 03:19:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:16:24.454 03:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:16:24.454 03:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:24.454 03:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:24.454 03:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:24.713 03:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:24.713 03:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:24.713 03:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:24.713 03:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:24.977 03:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:24.977 03:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:24.977 03:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:24.977 03:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:25.235 03:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:25.235 03:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:16:25.235 03:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:25.235 03:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:25.494 03:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:25.494 03:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:25.494 03:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:25.495 03:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:25.754 03:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:25.754 03:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:25.754 03:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:25.754 03:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:26.012 03:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:26.012 03:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:16:26.012 03:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:26.272 03:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:26.533 03:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:16:27.471 03:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:16:27.471 03:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:27.471 03:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.471 03:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:27.730 03:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:27.730 03:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:27.730 03:19:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.730 03:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:27.990 03:19:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:27.990 03:19:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:27.990 03:19:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.990 03:19:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:28.249 03:19:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:28.249 03:19:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:28.249 03:19:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:28.249 03:19:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:28.509 03:19:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:28.509 03:19:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:28.509 03:19:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:28.509 03:19:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:29.078 03:19:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:29.078 03:19:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:29.078 03:19:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:29.078 03:19:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:29.337 03:19:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:29.337 03:19:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:16:29.337 03:19:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:29.596 03:19:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:29.856 03:19:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:16:30.793 03:19:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:16:30.793 03:19:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:30.793 03:19:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:30.793 03:19:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:31.053 03:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:31.053 03:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:31.053 03:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:31.053 03:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:31.313 03:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:31.313 03:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:31.313 03:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:31.313 03:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:31.573 03:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:31.573 03:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:31.573 03:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:31.573 03:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:32.141 03:19:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:32.141 03:19:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:32.141 03:19:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.141 03:19:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:16:32.400 03:19:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:32.400 03:19:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:32.400 03:19:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.400 03:19:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:32.659 03:19:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:32.659 03:19:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:16:32.659 03:19:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:32.917 03:19:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:33.176 03:19:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:16:34.114 03:19:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:16:34.114 03:19:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:34.114 03:19:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:34.114 03:19:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:34.373 03:19:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:34.373 03:19:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:34.373 03:19:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:34.373 03:19:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:34.942 03:19:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:34.942 03:19:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:34.942 03:19:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:34.942 03:19:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
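Every port_status probe in this block is the same two-step check; restated as a stand-alone helper (a condensed equivalent of the function in host/multipath_status.sh, using only the rpc.py and jq calls visible above):

    # $1 = listener port (4420|4421), $2 = field (current|connected|accessible), $3 = expected
    port_status() {
        local got
        got=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
                  bdev_nvme_get_io_paths |
              jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ "$got" == "$3" ]]
    }
    port_status 4421 accessible false   # e.g.: 4421 should be unreachable after "inaccessible"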
00:16:35.201 03:19:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:35.201 03:19:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:35.201 03:19:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:35.202 03:19:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:35.461 03:19:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:35.461 03:19:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:35.461 03:19:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:35.461 03:19:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:35.720 03:19:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:35.720 03:19:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:35.720 03:19:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:35.720 03:19:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:35.980 03:19:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:35.980 03:19:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:16:36.239 03:19:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:16:36.239 03:19:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:36.499 03:19:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:37.067 03:19:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:16:38.005 03:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:16:38.005 03:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:38.005 03:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:16:38.005 03:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:38.264 03:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:38.264 03:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:38.264 03:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.264 03:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:38.523 03:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:38.523 03:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:38.523 03:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:38.523 03:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.783 03:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:38.783 03:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:38.783 03:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.783 03:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:39.046 03:19:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.046 03:19:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:39.046 03:19:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.046 03:19:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:39.305 03:19:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.305 03:19:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:39.305 03:19:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.305 03:19:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:39.564 03:19:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.564 
03:19:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:16:39.564 03:19:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:39.824 03:19:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:40.083 03:19:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:16:41.019 03:19:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:16:41.020 03:19:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:41.020 03:19:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:41.020 03:19:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:41.589 03:19:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:41.589 03:19:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:41.589 03:19:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:41.589 03:19:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:41.589 03:19:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:41.589 03:19:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:41.589 03:19:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:41.589 03:19:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.158 03:19:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:42.158 03:19:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:42.158 03:19:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.158 03:19:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:42.158 03:19:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:42.158 03:19:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:42.158 03:19:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.158 03:19:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:42.417 03:19:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:42.417 03:19:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:42.417 03:19:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:42.417 03:19:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.675 03:19:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:42.675 03:19:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:16:42.675 03:19:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:42.934 03:19:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:16:43.193 03:19:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:16:44.131 03:19:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:16:44.131 03:19:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:44.131 03:19:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:44.131 03:19:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:44.391 03:19:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:44.391 03:19:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:44.391 03:19:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:44.391 03:19:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:44.959 03:19:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:44.959 03:19:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:16:44.959 03:19:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:44.959 03:19:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:44.959 03:19:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:44.959 03:19:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:44.959 03:19:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:44.959 03:19:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:45.218 03:19:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:45.218 03:19:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:45.218 03:19:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:45.218 03:19:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:45.477 03:19:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:45.477 03:19:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:45.477 03:19:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:45.477 03:19:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:45.739 03:19:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:45.739 03:19:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:16:45.739 03:19:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:45.998 03:19:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:46.258 03:19:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:16:47.644 03:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:16:47.644 03:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:47.644 03:19:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:47.644 03:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:47.644 03:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:47.644 03:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:47.644 03:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:47.644 03:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:47.914 03:19:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:47.914 03:19:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:47.914 03:19:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:47.914 03:19:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:48.173 03:19:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:48.173 03:19:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:48.173 03:19:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:48.173 03:19:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:48.433 03:19:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:48.433 03:19:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:48.433 03:19:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:48.433 03:19:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:49.001 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:49.001 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:49.001 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:49.001 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:16:49.001 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:49.001 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76470 00:16:49.001 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 76470 ']' 00:16:49.001 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 76470 00:16:49.001 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:16:49.001 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:49.001 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76470 00:16:49.001 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:16:49.001 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:16:49.001 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76470' 00:16:49.001 killing process with pid 76470 00:16:49.001 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 76470 00:16:49.001 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 76470 00:16:49.260 { 00:16:49.260 "results": [ 00:16:49.260 { 00:16:49.260 "job": "Nvme0n1", 00:16:49.260 "core_mask": "0x4", 00:16:49.260 "workload": "verify", 00:16:49.260 "status": "terminated", 00:16:49.260 "verify_range": { 00:16:49.260 "start": 0, 00:16:49.260 "length": 16384 00:16:49.260 }, 00:16:49.260 "queue_depth": 128, 00:16:49.260 "io_size": 4096, 00:16:49.260 "runtime": 35.146271, 00:16:49.260 "iops": 8139.412570966632, 00:16:49.260 "mibps": 31.794580355338407, 00:16:49.260 "io_failed": 0, 00:16:49.260 "io_timeout": 0, 00:16:49.260 "avg_latency_us": 15695.38243885635, 00:16:49.260 "min_latency_us": 144.29090909090908, 00:16:49.260 "max_latency_us": 4026531.84 00:16:49.260 } 00:16:49.260 ], 00:16:49.260 "core_count": 1 00:16:49.260 } 00:16:49.523 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76470 00:16:49.523 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:49.523 [2024-10-09 03:18:54.982719] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:16:49.523 [2024-10-09 03:18:54.982824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76470 ] 00:16:49.523 [2024-10-09 03:18:55.122370] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.523 [2024-10-09 03:18:55.236395] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:16:49.523 [2024-10-09 03:18:55.289317] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:49.523 Running I/O for 90 seconds... 
00:16:49.523 8392.00 IOPS, 32.78 MiB/s [2024-10-09T03:19:32.826Z] 8864.00 IOPS, 34.62 MiB/s [2024-10-09T03:19:32.826Z] 8976.67 IOPS, 35.07 MiB/s [2024-10-09T03:19:32.826Z] 8628.00 IOPS, 33.70 MiB/s [2024-10-09T03:19:32.826Z] 8545.60 IOPS, 33.38 MiB/s [2024-10-09T03:19:32.826Z] 8614.67 IOPS, 33.65 MiB/s [2024-10-09T03:19:32.826Z] 8646.86 IOPS, 33.78 MiB/s [2024-10-09T03:19:32.826Z] 8706.75 IOPS, 34.01 MiB/s [2024-10-09T03:19:32.826Z] 8827.11 IOPS, 34.48 MiB/s [2024-10-09T03:19:32.826Z] 8864.10 IOPS, 34.63 MiB/s [2024-10-09T03:19:32.826Z] 8885.18 IOPS, 34.71 MiB/s [2024-10-09T03:19:32.826Z] 8879.08 IOPS, 34.68 MiB/s [2024-10-09T03:19:32.826Z] 8886.85 IOPS, 34.71 MiB/s [2024-10-09T03:19:32.826Z] 8863.21 IOPS, 34.62 MiB/s [2024-10-09T03:19:32.826Z] 8841.67 IOPS, 34.54 MiB/s [2024-10-09T03:19:32.826Z] [2024-10-09 03:19:12.669528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:47112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.523 [2024-10-09 03:19:12.669591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:49.523 [2024-10-09 03:19:12.669663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:47120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.523 [2024-10-09 03:19:12.669715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:49.523 [2024-10-09 03:19:12.669737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:47128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.523 [2024-10-09 03:19:12.669752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:49.523 [2024-10-09 03:19:12.669773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:47136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.523 [2024-10-09 03:19:12.669787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:49.523 [2024-10-09 03:19:12.669808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:47144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.523 [2024-10-09 03:19:12.669822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:49.523 [2024-10-09 03:19:12.669843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.523 [2024-10-09 03:19:12.669857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:49.523 [2024-10-09 03:19:12.669878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:47160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.523 [2024-10-09 03:19:12.669892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:49.523 [2024-10-09 03:19:12.669913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:47168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.523 [2024-10-09 03:19:12.669927] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:49.523 [2024-10-09 03:19:12.669948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:47176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.523 [2024-10-09 03:19:12.669962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:49.523 [2024-10-09 03:19:12.670008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:47184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.523 [2024-10-09 03:19:12.670024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:49.523 [2024-10-09 03:19:12.670087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:47192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.523 [2024-10-09 03:19:12.670120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:49.523 [2024-10-09 03:19:12.670141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:47200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.523 [2024-10-09 03:19:12.670156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:49.523 [2024-10-09 03:19:12.670207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:47208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.523 [2024-10-09 03:19:12.670222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:49.523 [2024-10-09 03:19:12.670242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:47216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.524 [2024-10-09 03:19:12.670258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:49.524 [2024-10-09 03:19:12.670279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:47224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.524 [2024-10-09 03:19:12.670293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:49.524 [2024-10-09 03:19:12.670314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:47232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.524 [2024-10-09 03:19:12.670339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:49.524 [2024-10-09 03:19:12.670390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:46664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.524 [2024-10-09 03:19:12.670403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:49.524 [2024-10-09 03:19:12.670423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:46672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:49.524 [2024-10-09 03:19:12.670436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:49.524 [2024-10-09 03:19:12.670455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:46680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.524 [2024-10-09 03:19:12.670469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:49.524 [2024-10-09 03:19:12.670488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:46688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.524 [2024-10-09 03:19:12.670502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:49.524 [2024-10-09 03:19:12.670521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:46696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.524 [2024-10-09 03:19:12.670534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:49.524 [2024-10-09 03:19:12.670564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:46704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.524 [2024-10-09 03:19:12.670599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:49.524 [2024-10-09 03:19:12.670619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:46712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.524 [2024-10-09 03:19:12.670634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:49.524 [2024-10-09 03:19:12.670654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:46720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.524 [2024-10-09 03:19:12.670668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:49.524 [2024-10-09 03:19:12.670689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:46728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.524 [2024-10-09 03:19:12.670703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:49.524 [2024-10-09 03:19:12.670723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:46736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.524 [2024-10-09 03:19:12.670737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:49.524 [2024-10-09 03:19:12.670757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:46744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.524 [2024-10-09 03:19:12.670771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:49.524 [2024-10-09 03:19:12.670791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 
nsid:1 lba:46752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.524 [2024-10-09 03:19:12.670805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:49.524 [2024-10-09 03:19:12.670825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:46760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.524 [2024-10-09 03:19:12.670839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:49.524 [2024-10-09 03:19:12.670858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:46768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.524 [2024-10-09 03:19:12.670872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:49.524 [2024-10-09 03:19:12.670892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:46776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.524 [2024-10-09 03:19:12.670906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:49.524 [2024-10-09 03:19:12.670926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:46784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.524 [2024-10-09 03:19:12.670940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:49.524 [2024-10-09 03:19:12.670978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:47240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.524 [2024-10-09 03:19:12.670993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:49.524 [2024-10-09 03:19:12.671023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:47248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.524 [2024-10-09 03:19:12.671038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:49.524 [2024-10-09 03:19:12.671058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:47256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.524 [2024-10-09 03:19:12.671071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:49.524 [2024-10-09 03:19:12.671091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:47264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.524 [2024-10-09 03:19:12.671104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:49.524 [2024-10-09 03:19:12.671123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:47272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.524 [2024-10-09 03:19:12.671137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:49.524 [2024-10-09 03:19:12.671171] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:47280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.524 [2024-10-09 03:19:12.671188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:49.524 [2024-10-09 03:19:12.671207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.524 [2024-10-09 03:19:12.671221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:49.524 [2024-10-09 03:19:12.671240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:47296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.524 [2024-10-09 03:19:12.671254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:49.524 [2024-10-09 03:19:12.671274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:47304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.524 [2024-10-09 03:19:12.671288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:49.524 [2024-10-09 03:19:12.671307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:47312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.524 [2024-10-09 03:19:12.671320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:49.524 [2024-10-09 03:19:12.671356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:47320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.524 [2024-10-09 03:19:12.671370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:49.524 [2024-10-09 03:19:12.671390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:47328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.524 [2024-10-09 03:19:12.671404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:49.524 [2024-10-09 03:19:12.671424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:47336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.524 [2024-10-09 03:19:12.671438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:49.524 [2024-10-09 03:19:12.671458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:47344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.524 [2024-10-09 03:19:12.671489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:49.524 [2024-10-09 03:19:12.671510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:47352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.525 [2024-10-09 03:19:12.671525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 
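(The run of qpair notices above and below is the host-side record of commands that were in flight when both listeners on 10.0.0.3 were switched to inaccessible at 03:19:12: each WRITE/READ print_command is paired with a print_completion carrying ASYMMETRIC ACCESS INACCESSIBLE (03/02) for the same cid. A quick, purely illustrative way to tally those completions from the try.txt dumped here:

    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
)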
00:16:49.525 [2024-10-09 03:19:12.671545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:47360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.525 [2024-10-09 03:19:12.671559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.671585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:46792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.525 [2024-10-09 03:19:12.671599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.671619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:46800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.525 [2024-10-09 03:19:12.671633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.671653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:46808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.525 [2024-10-09 03:19:12.671667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.671687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:46816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.525 [2024-10-09 03:19:12.671701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.671721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:46824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.525 [2024-10-09 03:19:12.671735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.671755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:46832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.525 [2024-10-09 03:19:12.671769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.671789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:46840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.525 [2024-10-09 03:19:12.671804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.671824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:46848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.525 [2024-10-09 03:19:12.671838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.671858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:47368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.525 [2024-10-09 03:19:12.671872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.671892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:47376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.525 [2024-10-09 03:19:12.671912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.671933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:47384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.525 [2024-10-09 03:19:12.671947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.671967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:47392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.525 [2024-10-09 03:19:12.671981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.672001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:47400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.525 [2024-10-09 03:19:12.672015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.672034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:47408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.525 [2024-10-09 03:19:12.672049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.672096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:47416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.525 [2024-10-09 03:19:12.672113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.672134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:47424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.525 [2024-10-09 03:19:12.672149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.672185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:47432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.525 [2024-10-09 03:19:12.672204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.672225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:47440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.525 [2024-10-09 03:19:12.672240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.672261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:47448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.525 [2024-10-09 03:19:12.672276] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.672296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:47456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.525 [2024-10-09 03:19:12.672310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.672331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:47464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.525 [2024-10-09 03:19:12.672345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.672366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.525 [2024-10-09 03:19:12.672380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.672412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:47480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.525 [2024-10-09 03:19:12.672428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.672449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:47488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.525 [2024-10-09 03:19:12.672464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.672499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:47496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.525 [2024-10-09 03:19:12.672514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.672534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:47504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.525 [2024-10-09 03:19:12.672548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.672568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:47512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.525 [2024-10-09 03:19:12.672582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.672602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:47520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.525 [2024-10-09 03:19:12.672616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.672636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:46856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:49.525 [2024-10-09 03:19:12.672650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.672670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:46864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.525 [2024-10-09 03:19:12.672684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.672704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:46872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.525 [2024-10-09 03:19:12.672718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.672739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:46880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.525 [2024-10-09 03:19:12.672753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.672773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:46888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.525 [2024-10-09 03:19:12.672787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.672807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:46896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.525 [2024-10-09 03:19:12.672821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:49.525 [2024-10-09 03:19:12.672848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:46904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.525 [2024-10-09 03:19:12.672863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.672883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.526 [2024-10-09 03:19:12.672897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.672917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:46920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.526 [2024-10-09 03:19:12.672931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.672951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:46928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.526 [2024-10-09 03:19:12.672972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.672994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:46936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.526 [2024-10-09 03:19:12.673008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.673028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:46944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.526 [2024-10-09 03:19:12.673042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.673074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:46952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.526 [2024-10-09 03:19:12.673099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.673120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:46960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.526 [2024-10-09 03:19:12.673134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.673155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:46968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.526 [2024-10-09 03:19:12.673169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.673189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.526 [2024-10-09 03:19:12.673203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.673223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:47528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.526 [2024-10-09 03:19:12.673237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.673256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:47536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.526 [2024-10-09 03:19:12.673271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.673291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:47544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.526 [2024-10-09 03:19:12.673327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.673348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:47552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.526 [2024-10-09 03:19:12.673362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.673385] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:47560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.526 [2024-10-09 03:19:12.673400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.673420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:47568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.526 [2024-10-09 03:19:12.673434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.673454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:47576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.526 [2024-10-09 03:19:12.673467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.673487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:47584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.526 [2024-10-09 03:19:12.673501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.673520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:47592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.526 [2024-10-09 03:19:12.673534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.673553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:47600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.526 [2024-10-09 03:19:12.673572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.673591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:47608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.526 [2024-10-09 03:19:12.673605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.673624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:47616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.526 [2024-10-09 03:19:12.673638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.673657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:46984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.526 [2024-10-09 03:19:12.673675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.673695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:46992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.526 [2024-10-09 03:19:12.673709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 
00:16:49.526 [2024-10-09 03:19:12.673729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:47000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.526 [2024-10-09 03:19:12.673752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.673773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:47008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.526 [2024-10-09 03:19:12.673787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.673807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:47016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.526 [2024-10-09 03:19:12.673821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.673841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:47024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.526 [2024-10-09 03:19:12.673854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.673873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:47032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.526 [2024-10-09 03:19:12.673887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.673907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:47040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.526 [2024-10-09 03:19:12.673921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.673940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:47048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.526 [2024-10-09 03:19:12.673954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.673973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:47056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.526 [2024-10-09 03:19:12.673987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.674006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:47064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.526 [2024-10-09 03:19:12.674020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.674076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:47072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.526 [2024-10-09 03:19:12.674094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.674117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:47080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.526 [2024-10-09 03:19:12.674132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.674153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:47088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.526 [2024-10-09 03:19:12.674173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.674195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:47096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.526 [2024-10-09 03:19:12.674210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.675050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:47104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.526 [2024-10-09 03:19:12.675076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.526 [2024-10-09 03:19:12.675108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:47624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.527 [2024-10-09 03:19:12.675130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:49.527 [2024-10-09 03:19:12.675172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:47632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.527 [2024-10-09 03:19:12.675188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:49.527 [2024-10-09 03:19:12.675215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:47640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.527 [2024-10-09 03:19:12.675229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:49.527 [2024-10-09 03:19:12.675255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:47648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.527 [2024-10-09 03:19:12.675270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:49.527 [2024-10-09 03:19:12.675296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:47656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.527 [2024-10-09 03:19:12.675310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:49.527 [2024-10-09 03:19:12.675336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:47664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.527 [2024-10-09 03:19:12.675350] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:49.527 [2024-10-09 03:19:12.675377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:47672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.527 [2024-10-09 03:19:12.675391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:49.527 [2024-10-09 03:19:12.675431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:47680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.527 [2024-10-09 03:19:12.675450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:49.527 8556.56 IOPS, 33.42 MiB/s [2024-10-09T03:19:32.830Z] 8053.24 IOPS, 31.46 MiB/s [2024-10-09T03:19:32.830Z] 7605.83 IOPS, 29.71 MiB/s [2024-10-09T03:19:32.830Z] 7205.53 IOPS, 28.15 MiB/s [2024-10-09T03:19:32.830Z] 7075.80 IOPS, 27.64 MiB/s [2024-10-09T03:19:32.830Z] 7176.90 IOPS, 28.03 MiB/s [2024-10-09T03:19:32.830Z] 7263.05 IOPS, 28.37 MiB/s [2024-10-09T03:19:32.830Z] 7346.65 IOPS, 28.70 MiB/s [2024-10-09T03:19:32.830Z] 7427.62 IOPS, 29.01 MiB/s [2024-10-09T03:19:32.830Z] 7508.24 IOPS, 29.33 MiB/s [2024-10-09T03:19:32.830Z] 7608.81 IOPS, 29.72 MiB/s [2024-10-09T03:19:32.830Z] 7698.07 IOPS, 30.07 MiB/s [2024-10-09T03:19:32.830Z] 7776.68 IOPS, 30.38 MiB/s [2024-10-09T03:19:32.830Z] 7866.28 IOPS, 30.73 MiB/s [2024-10-09T03:19:32.830Z] 7969.77 IOPS, 31.13 MiB/s [2024-10-09T03:19:32.830Z] 8024.00 IOPS, 31.34 MiB/s [2024-10-09T03:19:32.830Z] 8080.53 IOPS, 31.56 MiB/s [2024-10-09T03:19:32.830Z] [2024-10-09 03:19:29.495341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.527 [2024-10-09 03:19:29.495464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:49.527 [2024-10-09 03:19:29.495544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.527 [2024-10-09 03:19:29.495599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:49.527 [2024-10-09 03:19:29.495623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.527 [2024-10-09 03:19:29.495638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:49.527 [2024-10-09 03:19:29.495657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.527 [2024-10-09 03:19:29.495671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:49.527 [2024-10-09 03:19:29.495690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.527 [2024-10-09 03:19:29.495704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:16:49.527 [2024-10-09 03:19:29.495723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.527 [2024-10-09 03:19:29.495752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:49.527 [2024-10-09 03:19:29.495770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.527 [2024-10-09 03:19:29.495783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:49.527 [2024-10-09 03:19:29.495802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.527 [2024-10-09 03:19:29.495815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:49.527 [2024-10-09 03:19:29.495834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:129104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.527 [2024-10-09 03:19:29.495847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:49.527 [2024-10-09 03:19:29.495866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.527 [2024-10-09 03:19:29.495879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:49.527 [2024-10-09 03:19:29.495897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.527 [2024-10-09 03:19:29.495910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:49.527 [2024-10-09 03:19:29.495929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.527 [2024-10-09 03:19:29.495942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:49.527 [2024-10-09 03:19:29.495961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:128760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.527 [2024-10-09 03:19:29.495974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:49.527 [2024-10-09 03:19:29.495993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:128792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.527 [2024-10-09 03:19:29.496015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:49.527 [2024-10-09 03:19:29.496035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:128824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.527 [2024-10-09 03:19:29.496049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:49.527 [2024-10-09 03:19:29.496068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:128856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.527 [2024-10-09 03:19:29.496095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:49.527 [2024-10-09 03:19:29.496116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:128888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.527 [2024-10-09 03:19:29.496130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:49.527 [2024-10-09 03:19:29.496153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:128920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.527 [2024-10-09 03:19:29.496168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:49.527 [2024-10-09 03:19:29.496186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.527 [2024-10-09 03:19:29.496200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:49.527 [2024-10-09 03:19:29.496219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.527 [2024-10-09 03:19:29.496249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:49.527 [2024-10-09 03:19:29.496269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:128552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.527 [2024-10-09 03:19:29.496283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:49.527 [2024-10-09 03:19:29.496303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:128584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.527 [2024-10-09 03:19:29.496317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:49.527 [2024-10-09 03:19:29.496337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:128624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.527 [2024-10-09 03:19:29.496351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:49.527 [2024-10-09 03:19:29.496371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:128944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.527 [2024-10-09 03:19:29.496386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:49.527 [2024-10-09 03:19:29.497434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:128640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.527 [2024-10-09 03:19:29.497465] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:49.528 [2024-10-09 03:19:29.497491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.528 [2024-10-09 03:19:29.497507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:49.528 [2024-10-09 03:19:29.497556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.528 [2024-10-09 03:19:29.497571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:49.528 [2024-10-09 03:19:29.497591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.528 [2024-10-09 03:19:29.497604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:49.528 [2024-10-09 03:19:29.497623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:129240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.528 [2024-10-09 03:19:29.497637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:49.528 [2024-10-09 03:19:29.497656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:129256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.528 [2024-10-09 03:19:29.497670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:49.528 [2024-10-09 03:19:29.497689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:129272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.528 [2024-10-09 03:19:29.497703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:49.528 [2024-10-09 03:19:29.497725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:129288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.528 [2024-10-09 03:19:29.497739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:49.528 [2024-10-09 03:19:29.498785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:129304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.528 [2024-10-09 03:19:29.498813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:49.528 [2024-10-09 03:19:29.498840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:128672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:49.528 [2024-10-09 03:19:29.498871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:49.528 [2024-10-09 03:19:29.498890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:128704 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:16:49.528 [2024-10-09 03:19:29.498904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:49.528 [2024-10-09 03:19:29.498923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.528 [2024-10-09 03:19:29.498937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:49.528 [2024-10-09 03:19:29.498956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:129336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.528 [2024-10-09 03:19:29.498970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:49.528 [2024-10-09 03:19:29.498988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:129352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.528 [2024-10-09 03:19:29.499002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:49.528 [2024-10-09 03:19:29.499034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:129368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.528 [2024-10-09 03:19:29.499049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:49.528 [2024-10-09 03:19:29.499067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:129384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.528 [2024-10-09 03:19:29.499081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:49.528 [2024-10-09 03:19:29.499100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:129400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.528 [2024-10-09 03:19:29.499127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:49.528 [2024-10-09 03:19:29.499148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:129416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.528 [2024-10-09 03:19:29.499162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:49.528 [2024-10-09 03:19:29.499181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:129432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.528 [2024-10-09 03:19:29.499195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:49.528 [2024-10-09 03:19:29.499214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:129448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:49.528 [2024-10-09 03:19:29.499228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:49.528 8127.45 IOPS, 31.75 MiB/s [2024-10-09T03:19:32.831Z] 8138.06 IOPS, 31.79 MiB/s 
[2024-10-09T03:19:32.831Z] 8139.14 IOPS, 31.79 MiB/s [2024-10-09T03:19:32.831Z] Received shutdown signal, test time was about 35.147158 seconds 00:16:49.528 00:16:49.528 Latency(us) 00:16:49.528 [2024-10-09T03:19:32.831Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.528 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:49.528 Verification LBA range: start 0x0 length 0x4000 00:16:49.528 Nvme0n1 : 35.15 8139.41 31.79 0.00 0.00 15695.38 144.29 4026531.84 00:16:49.528 [2024-10-09T03:19:32.831Z] =================================================================================================================== 00:16:49.528 [2024-10-09T03:19:32.831Z] Total : 8139.41 31.79 0.00 0.00 15695.38 144.29 4026531.84 00:16:49.528 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:49.787 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:16:49.787 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:49.787 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:16:49.787 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:49.787 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:16:49.787 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:49.787 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:16:49.787 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:49.787 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:49.787 rmmod nvme_tcp 00:16:49.787 rmmod nvme_fabrics 00:16:49.787 rmmod nvme_keyring 00:16:49.787 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:49.787 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:16:49.787 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:16:49.787 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 76414 ']' 00:16:49.787 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 76414 00:16:49.787 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 76414 ']' 00:16:49.787 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 76414 00:16:49.787 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:16:49.787 03:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:49.787 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76414 00:16:49.787 killing process with pid 76414 00:16:49.787 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:49.788 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:49.788 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76414' 00:16:49.788 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 76414 00:16:49.788 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 76414 00:16:50.046 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:50.046 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:50.046 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:50.046 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:16:50.047 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:50.047 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save 00:16:50.047 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore 00:16:50.047 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:50.047 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:50.047 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:50.047 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:50.047 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:50.306 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:50.306 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:50.306 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:50.306 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:50.306 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:50.306 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:50.306 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:50.306 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:50.306 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:50.306 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:50.306 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:50.306 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.306 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:16:50.306 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.306 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:16:50.306 00:16:50.306 real 0m41.996s 00:16:50.306 user 2m15.672s 00:16:50.306 sys 0m12.169s 00:16:50.306 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:50.306 ************************************ 00:16:50.306 END TEST nvmf_host_multipath_status 00:16:50.306 ************************************ 00:16:50.306 03:19:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:50.306 03:19:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:50.306 03:19:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:50.306 03:19:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:50.306 03:19:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.306 ************************************ 00:16:50.306 START TEST nvmf_discovery_remove_ifc 00:16:50.306 ************************************ 00:16:50.306 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:50.565 * Looking for test storage... 00:16:50.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- 
# : 1 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:50.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.565 --rc genhtml_branch_coverage=1 00:16:50.565 --rc genhtml_function_coverage=1 00:16:50.565 --rc genhtml_legend=1 00:16:50.565 --rc geninfo_all_blocks=1 00:16:50.565 --rc geninfo_unexecuted_blocks=1 00:16:50.565 00:16:50.565 ' 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:50.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.565 --rc genhtml_branch_coverage=1 00:16:50.565 --rc genhtml_function_coverage=1 00:16:50.565 --rc genhtml_legend=1 00:16:50.565 --rc geninfo_all_blocks=1 00:16:50.565 --rc geninfo_unexecuted_blocks=1 00:16:50.565 00:16:50.565 ' 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:50.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.565 --rc genhtml_branch_coverage=1 00:16:50.565 --rc genhtml_function_coverage=1 00:16:50.565 --rc genhtml_legend=1 00:16:50.565 --rc geninfo_all_blocks=1 00:16:50.565 --rc geninfo_unexecuted_blocks=1 00:16:50.565 00:16:50.565 ' 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:50.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.565 --rc genhtml_branch_coverage=1 00:16:50.565 --rc genhtml_function_coverage=1 00:16:50.565 --rc genhtml_legend=1 00:16:50.565 --rc geninfo_all_blocks=1 00:16:50.565 --rc 
geninfo_unexecuted_blocks=1 00:16:50.565 00:16:50.565 ' 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.565 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:16:50.566 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@458 -- # nvmf_veth_init 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:50.566 03:19:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:50.566 Cannot find device "nvmf_init_br" 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:50.566 Cannot find device "nvmf_init_br2" 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:50.566 Cannot find device "nvmf_tgt_br" 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:16:50.566 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:50.825 Cannot find device "nvmf_tgt_br2" 00:16:50.825 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:16:50.825 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:50.825 Cannot find device "nvmf_init_br" 00:16:50.825 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:16:50.825 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:50.825 Cannot find device "nvmf_init_br2" 00:16:50.825 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:16:50.825 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:50.825 Cannot find device "nvmf_tgt_br" 00:16:50.825 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:16:50.825 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set 
nvmf_tgt_br2 down 00:16:50.825 Cannot find device "nvmf_tgt_br2" 00:16:50.825 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:16:50.825 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:50.825 Cannot find device "nvmf_br" 00:16:50.825 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:16:50.825 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:50.825 Cannot find device "nvmf_init_if" 00:16:50.825 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:16:50.825 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:50.825 Cannot find device "nvmf_init_if2" 00:16:50.825 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:16:50.825 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:50.826 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:50.826 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:16:50.826 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:50.826 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:50.826 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:16:50.826 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:50.826 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:50.826 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:50.826 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:50.826 03:19:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:50.826 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:50.826 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:50.826 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:50.826 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:50.826 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:50.826 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:50.826 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:50.826 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:50.826 03:19:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:50.826 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:50.826 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:50.826 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:50.826 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:51.084 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:51.084 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:51.084 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:51.084 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:51.084 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:51.084 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:51.084 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:51.085 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:51.085 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:51.085 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:51.085 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:51.085 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:51.085 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:51.085 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:51.085 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:51.085 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:51.085 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:16:51.085 00:16:51.085 --- 10.0.0.3 ping statistics --- 00:16:51.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.085 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:16:51.085 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:51.085 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:16:51.085 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:16:51.085 00:16:51.085 --- 10.0.0.4 ping statistics --- 00:16:51.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.085 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:16:51.085 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:51.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:51.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:16:51.085 00:16:51.085 --- 10.0.0.1 ping statistics --- 00:16:51.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.085 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:16:51.085 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:51.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:51.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:16:51.085 00:16:51.085 --- 10.0.0.2 ping statistics --- 00:16:51.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.085 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:16:51.085 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:51.085 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # return 0 00:16:51.085 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:51.085 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:51.085 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:51.085 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:51.085 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:51.085 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:51.085 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:51.085 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:51.085 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:51.085 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:51.085 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:51.085 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=77327 00:16:51.085 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:51.085 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 77327 00:16:51.085 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 77327 ']' 00:16:51.085 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.085 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 
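The nvmf/common.sh trace above (@177 through @225) is the suite's nvmf_veth_init: it builds a self-contained test network with two initiator-side veth pairs on the host, two target-side veth pairs whose far ends live in the nvmf_tgt_ns_spdk namespace, all peer ends enslaved to one bridge, iptables ACCEPT rules tagged with an SPDK_NVMF comment so teardown can find them, and a four-way ping check. A condensed sketch of those steps, with names and addresses taken from the trace (the real script wraps iptables in an ipts helper and tolerates missing devices on cleanup):

    # reconstruction of the nvmf_veth_init steps traced above
    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # the target-side ends move into the namespace where nvmf_tgt will run
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # bring everything up, then tie the *_br peers together on one bridge
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # allow NVMe/TCP (port 4420) in, tagging each rule so cleanup can strip it later
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

    # connectivity check in both directions before starting the target
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

The SPDK_NVMF comment on each rule is what the iptr cleanup step at the end of the test relies on: it saves the ruleset, drops every line carrying that tag, and restores the rest.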
00:16:51.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.085 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.085 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:51.085 03:19:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:51.085 [2024-10-09 03:19:34.347543] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:16:51.085 [2024-10-09 03:19:34.347706] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.344 [2024-10-09 03:19:34.498196] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.344 [2024-10-09 03:19:34.621837] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.344 [2024-10-09 03:19:34.621910] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:51.344 [2024-10-09 03:19:34.621925] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:51.344 [2024-10-09 03:19:34.621936] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:51.344 [2024-10-09 03:19:34.621945] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:51.344 [2024-10-09 03:19:34.622525] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.603 [2024-10-09 03:19:34.699032] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:52.170 03:19:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:52.170 03:19:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:16:52.170 03:19:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:52.170 03:19:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:52.170 03:19:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:52.429 03:19:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.429 03:19:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:52.429 03:19:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.429 03:19:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:52.429 [2024-10-09 03:19:35.499391] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:52.429 [2024-10-09 03:19:35.507538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:16:52.429 null0 00:16:52.429 [2024-10-09 03:19:35.543424] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:52.429 03:19:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.429 03:19:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77360 00:16:52.429 03:19:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:52.429 03:19:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77360 /tmp/host.sock 00:16:52.429 03:19:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 77360 ']' 00:16:52.429 03:19:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:16:52.429 03:19:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:52.429 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:52.429 03:19:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:52.429 03:19:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:52.429 03:19:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:52.429 [2024-10-09 03:19:35.644044] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:16:52.429 [2024-10-09 03:19:35.644180] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77360 ] 00:16:52.688 [2024-10-09 03:19:35.782568] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.688 [2024-10-09 03:19:35.880743] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.624 03:19:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:53.624 03:19:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:16:53.624 03:19:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:53.624 03:19:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:53.624 03:19:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.624 03:19:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:53.624 03:19:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.624 03:19:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:53.624 03:19:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.624 03:19:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:53.624 [2024-10-09 03:19:36.716721] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:53.624 03:19:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.624 03:19:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:53.624 03:19:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.624 03:19:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:54.567 [2024-10-09 03:19:37.772927] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:54.567 [2024-10-09 03:19:37.772995] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:54.567 [2024-10-09 03:19:37.773015] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:54.567 [2024-10-09 03:19:37.778970] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:16:54.567 [2024-10-09 03:19:37.835917] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:54.567 [2024-10-09 03:19:37.835993] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:54.567 [2024-10-09 03:19:37.836021] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:54.567 [2024-10-09 03:19:37.836038] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:54.567 [2024-10-09 03:19:37.836104] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:54.567 03:19:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.567 03:19:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:54.567 03:19:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:54.567 [2024-10-09 03:19:37.841124] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xed1400 was disconnected and freed. delete nvme_qpair. 
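The repeated @29/@33/@34 traces that follow are the host script polling the bdev layer over the /tmp/host.sock RPC socket: get_bdev_list flattens the bdev_get_bdevs output into a sorted, space-separated list of names, and wait_for_bdev re-checks it once per second until it equals the expected value (nvme0n1 here, later '' and then nvme1n1). The loop structure below is inferred from those traces rather than copied from the script, and rpc_cmd is the suite's RPC wrapper (effectively scripts/rpc.py pointed at the given socket):

    # inferred shape of the host/discovery_remove_ifc.sh helpers traced below
    get_bdev_list() {
        # query the host-side SPDK app (started with -r /tmp/host.sock) for its bdevs
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

    wait_for_bdev nvme0n1   # discovery attached the remote namespace as nvme0n1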
00:16:54.567 03:19:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:54.567 03:19:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.567 03:19:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:54.567 03:19:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:54.567 03:19:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:54.568 03:19:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:54.568 03:19:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.827 03:19:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:54.827 03:19:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:16:54.827 03:19:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:54.827 03:19:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:54.827 03:19:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:54.827 03:19:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:54.827 03:19:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:54.827 03:19:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.827 03:19:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:54.827 03:19:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:54.827 03:19:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:54.827 03:19:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.827 03:19:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:54.827 03:19:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:55.764 03:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:55.764 03:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:55.764 03:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:55.764 03:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.764 03:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:55.764 03:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:55.764 03:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:55.764 03:19:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.764 03:19:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:55.764 03:19:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:57.142 03:19:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:57.142 03:19:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:57.142 03:19:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:57.142 03:19:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.142 03:19:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:57.142 03:19:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:57.142 03:19:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:57.142 03:19:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.142 03:19:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:57.142 03:19:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:58.079 03:19:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:58.079 03:19:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:58.079 03:19:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:58.079 03:19:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.079 03:19:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:58.079 03:19:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:58.079 03:19:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:58.079 03:19:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.079 03:19:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:58.079 03:19:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:59.015 03:19:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:59.015 03:19:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:59.015 03:19:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.015 03:19:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:59.015 03:19:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:59.015 03:19:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:59.015 03:19:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:59.015 03:19:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.015 03:19:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:59.015 03:19:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:59.951 03:19:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:59.951 03:19:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:59.951 03:19:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:59.951 03:19:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.951 03:19:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:59.951 03:19:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:59.951 03:19:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:59.951 03:19:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.210 03:19:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:00.210 03:19:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:00.210 [2024-10-09 03:19:43.265339] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:17:00.210 [2024-10-09 03:19:43.265402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.210 [2024-10-09 03:19:43.265418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.210 [2024-10-09 03:19:43.265433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.210 [2024-10-09 03:19:43.265443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.210 [2024-10-09 03:19:43.265453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.210 [2024-10-09 03:19:43.265462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.210 [2024-10-09 03:19:43.265473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.210 [2024-10-09 03:19:43.265483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.210 [2024-10-09 03:19:43.265494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.210 [2024-10-09 03:19:43.265503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.210 [2024-10-09 03:19:43.265512] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea4f70 is same with the state(6) to be set 00:17:00.210 [2024-10-09 03:19:43.275335] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea4f70 (9): Bad file descriptor 00:17:00.210 [2024-10-09 03:19:43.285365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:01.148 03:19:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:01.148 03:19:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:01.148 03:19:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:01.148 03:19:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.148 03:19:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:01.148 03:19:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:01.148 03:19:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:01.148 [2024-10-09 03:19:44.298173] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:17:01.148 [2024-10-09 03:19:44.298279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea4f70 with addr=10.0.0.3, port=4420 00:17:01.148 [2024-10-09 03:19:44.298317] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea4f70 is same with the state(6) to be set 00:17:01.148 [2024-10-09 03:19:44.298387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea4f70 (9): Bad file descriptor 00:17:01.148 [2024-10-09 03:19:44.299306] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:01.148 [2024-10-09 03:19:44.299390] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:01.148 [2024-10-09 03:19:44.299414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:01.148 [2024-10-09 03:19:44.299438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:01.148 [2024-10-09 03:19:44.299505] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:01.148 [2024-10-09 03:19:44.299534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:01.148 03:19:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.148 03:19:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:01.148 03:19:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:02.085 [2024-10-09 03:19:45.299606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:17:02.085 [2024-10-09 03:19:45.299674] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:02.085 [2024-10-09 03:19:45.299701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:02.085 [2024-10-09 03:19:45.299711] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:17:02.085 [2024-10-09 03:19:45.299735] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:02.085 [2024-10-09 03:19:45.299766] bdev_nvme.c:6904:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:17:02.085 [2024-10-09 03:19:45.299819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:02.085 [2024-10-09 03:19:45.299835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:02.085 [2024-10-09 03:19:45.299848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:02.085 [2024-10-09 03:19:45.299856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:02.085 [2024-10-09 03:19:45.299866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:02.085 [2024-10-09 03:19:45.299875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:02.085 [2024-10-09 03:19:45.299884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:02.085 [2024-10-09 03:19:45.299893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:02.085 [2024-10-09 03:19:45.299902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:02.085 [2024-10-09 03:19:45.299911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:02.085 [2024-10-09 03:19:45.299920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:17:02.085 [2024-10-09 03:19:45.299957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe39d70 (9): Bad file descriptor 00:17:02.085 [2024-10-09 03:19:45.300957] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:17:02.085 [2024-10-09 03:19:45.300976] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:17:02.085 03:19:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:02.085 03:19:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:02.085 03:19:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:02.085 03:19:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.085 03:19:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:02.085 03:19:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:02.085 03:19:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:02.085 03:19:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.344 03:19:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:17:02.344 03:19:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:02.344 03:19:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:02.344 03:19:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:17:02.344 03:19:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:02.344 03:19:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:02.344 03:19:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:02.344 03:19:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.344 03:19:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:02.344 03:19:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:02.344 03:19:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:02.344 03:19:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.344 03:19:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:02.344 03:19:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:03.280 03:19:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:03.280 03:19:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:03.280 03:19:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:03.280 03:19:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.280 03:19:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:03.280 03:19:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:03.280 03:19:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:03.280 03:19:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.280 03:19:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:03.280 03:19:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:04.222 [2024-10-09 03:19:47.307842] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:04.222 [2024-10-09 03:19:47.307879] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:04.222 [2024-10-09 03:19:47.307896] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:04.222 [2024-10-09 03:19:47.313879] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:17:04.222 [2024-10-09 03:19:47.370511] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:04.222 [2024-10-09 03:19:47.370698] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:04.222 [2024-10-09 03:19:47.370735] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:04.222 [2024-10-09 03:19:47.370752] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:17:04.222 [2024-10-09 03:19:47.370761] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:17:04.222 [2024-10-09 03:19:47.375883] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xeddc30 was disconnected and freed. delete nvme_qpair. 
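The sequence above is the point of the test. With the discovery connection established through nvmf_tgt_if, the script deletes 10.0.0.3 and downs the interface inside the namespace; the host's reads hit errno 110 (connection timed out), reconnect attempts fail, and because the controller was attached with a 2-second controller-loss timeout and 1-second reconnect delay it is torn down, so nvme0n1 drops out of the bdev list. Restoring the address brings the discovery service back and the same subsystem re-attaches as nvme1/nvme1n1. Condensed from the traced @69, @75/@76, @79, @82/@83 and @86 steps:

    # start discovery against the target's 10.0.0.3:8009 with short failure timeouts
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
        --wait-for-attach
    wait_for_bdev nvme0n1

    # yank the target-side interface out from under the live connection
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    wait_for_bdev ''        # controller-loss timeout expires -> bdev disappears

    # put the interface back; discovery reconnects and re-creates the namespace bdev
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    wait_for_bdev nvme1n1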
00:17:04.481 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:04.481 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:04.481 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:04.481 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:04.481 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.481 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:04.481 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:04.481 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.481 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:17:04.481 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:17:04.481 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77360 00:17:04.481 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 77360 ']' 00:17:04.481 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 77360 00:17:04.481 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:17:04.481 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:04.481 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77360 00:17:04.481 killing process with pid 77360 00:17:04.481 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:04.481 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:04.481 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77360' 00:17:04.482 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 77360 00:17:04.482 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 77360 00:17:04.741 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:17:04.741 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:04.741 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:17:04.741 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:04.741 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:17:04.741 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:04.741 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:04.741 rmmod nvme_tcp 00:17:04.741 rmmod nvme_fabrics 00:17:04.741 rmmod nvme_keyring 00:17:04.741 03:19:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:04.741 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:17:04.741 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:17:04.741 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 77327 ']' 00:17:04.741 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 77327 00:17:04.741 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 77327 ']' 00:17:04.741 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 77327 00:17:04.741 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:17:04.741 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:04.741 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77327 00:17:04.741 killing process with pid 77327 00:17:04.741 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:04.741 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:04.741 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77327' 00:17:04.741 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 77327 00:17:04.741 03:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 77327 00:17:05.000 03:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:05.000 03:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:05.000 03:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:05.000 03:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:17:05.000 03:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:17:05.000 03:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:05.000 03:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:17:05.000 03:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:05.000 03:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:05.000 03:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:05.000 03:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:05.258 03:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:05.258 03:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:05.258 03:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:05.258 03:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:05.258 03:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:05.258 03:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:05.258 03:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:05.258 03:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:05.258 03:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:05.258 03:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:05.258 03:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:05.258 03:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:05.258 03:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.258 03:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:05.258 03:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.258 03:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:17:05.258 00:17:05.258 real 0m14.932s 00:17:05.258 user 0m25.237s 00:17:05.258 sys 0m2.683s 00:17:05.258 03:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:05.258 03:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:05.258 ************************************ 00:17:05.258 END TEST nvmf_discovery_remove_ifc 00:17:05.258 ************************************ 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.518 ************************************ 00:17:05.518 START TEST nvmf_identify_kernel_target 00:17:05.518 ************************************ 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:05.518 * Looking for test storage... 
00:17:05.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:05.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.518 --rc genhtml_branch_coverage=1 00:17:05.518 --rc genhtml_function_coverage=1 00:17:05.518 --rc genhtml_legend=1 00:17:05.518 --rc geninfo_all_blocks=1 00:17:05.518 --rc geninfo_unexecuted_blocks=1 00:17:05.518 00:17:05.518 ' 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:05.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.518 --rc genhtml_branch_coverage=1 00:17:05.518 --rc genhtml_function_coverage=1 00:17:05.518 --rc genhtml_legend=1 00:17:05.518 --rc geninfo_all_blocks=1 00:17:05.518 --rc geninfo_unexecuted_blocks=1 00:17:05.518 00:17:05.518 ' 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:05.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.518 --rc genhtml_branch_coverage=1 00:17:05.518 --rc genhtml_function_coverage=1 00:17:05.518 --rc genhtml_legend=1 00:17:05.518 --rc geninfo_all_blocks=1 00:17:05.518 --rc geninfo_unexecuted_blocks=1 00:17:05.518 00:17:05.518 ' 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:05.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.518 --rc genhtml_branch_coverage=1 00:17:05.518 --rc genhtml_function_coverage=1 00:17:05.518 --rc genhtml_legend=1 00:17:05.518 --rc geninfo_all_blocks=1 00:17:05.518 --rc geninfo_unexecuted_blocks=1 00:17:05.518 00:17:05.518 ' 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
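The scripts/common.sh walk above (lt 1.15 2 calling cmp_versions 1.15 '<' 2) is a plain-bash dotted-version comparison used here to pick lcov options: both version strings are split on '.', '-' and ':' and compared component by component, with missing components treated as zero. A minimal reconstruction of the logic the trace steps through; the real script's structure (its decimal validation and per-operator branches) differs in detail:

    # reconstruction of the cmp_versions logic traced above
    cmp_versions() {
        local ver1 ver2 ver1_l ver2_l op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            if ((${ver1[v]:-0} > ${ver2[v]:-0})); then
                [[ $op == '>' || $op == '>=' ]]; return
            elif ((${ver1[v]:-0} < ${ver2[v]:-0})); then
                [[ $op == '<' || $op == '<=' ]]; return
            fi
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]
    }

    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo "lcov predates 2.x, use the older option set"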
00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:17:05.518 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:05.519 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:17:05.519 03:19:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # nvmf_veth_init 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:05.519 03:19:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:05.519 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:05.778 Cannot find device "nvmf_init_br" 00:17:05.778 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:17:05.778 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:05.778 Cannot find device "nvmf_init_br2" 00:17:05.778 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:17:05.778 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:05.778 Cannot find device "nvmf_tgt_br" 00:17:05.778 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:17:05.778 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:05.778 Cannot find device "nvmf_tgt_br2" 00:17:05.778 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:17:05.778 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:05.778 Cannot find device "nvmf_init_br" 00:17:05.778 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:17:05.778 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:05.778 Cannot find device "nvmf_init_br2" 00:17:05.778 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:17:05.778 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:05.778 Cannot find device "nvmf_tgt_br" 00:17:05.778 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:17:05.778 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:05.778 Cannot find device "nvmf_tgt_br2" 00:17:05.778 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:17:05.778 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:05.778 Cannot find device "nvmf_br" 00:17:05.778 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:17:05.778 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:05.778 Cannot find device "nvmf_init_if" 00:17:05.778 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:17:05.778 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:05.778 Cannot find device "nvmf_init_if2" 00:17:05.778 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:17:05.778 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:05.778 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:05.778 03:19:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:17:05.778 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:05.778 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:05.778 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:17:05.778 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:05.778 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:05.778 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:05.778 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:05.778 03:19:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:05.778 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:05.778 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:05.778 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:05.778 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:05.778 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:06.037 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:06.037 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:06.037 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:06.037 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:06.037 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:06.037 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:06.037 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:06.037 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:06.037 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:06.037 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:06.037 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:06.037 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:06.037 03:19:49 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:06.037 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:06.037 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:06.037 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:06.037 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:06.037 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:06.037 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:06.037 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:06.037 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:06.037 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:06.037 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:06.037 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:06.037 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:17:06.037 00:17:06.037 --- 10.0.0.3 ping statistics --- 00:17:06.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.037 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:17:06.037 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:06.037 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:06.037 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:17:06.037 00:17:06.037 --- 10.0.0.4 ping statistics --- 00:17:06.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.037 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:17:06.037 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:06.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:06.038 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:17:06.038 00:17:06.038 --- 10.0.0.1 ping statistics --- 00:17:06.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.038 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:17:06.038 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:06.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
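The nvmf_veth_init sequence above builds the all-virtual test network the rest of the run relies on. Reduced to its essential commands (the second initiator/target pair, the pre-existing-device checks, and the SPDK_NVMF iptables comment tags are omitted), the topology is roughly:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end + its bridge port
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target end + its bridge port
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target side lives in its own namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                            # initiator -> target sanity check

The second pair (nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2 and 10.0.0.4) is wired up the same way, which is what the four single-packet pings above and below verify before the test proceeds.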
00:17:06.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:17:06.038 00:17:06.038 --- 10.0.0.2 ping statistics --- 00:17:06.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.038 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:17:06.038 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:06.038 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # return 0 00:17:06.038 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:06.038 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:06.038 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:06.038 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:06.038 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:06.038 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:06.038 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:06.038 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:17:06.038 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:17:06.038 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:17:06.038 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:06.038 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:06.038 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.038 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.038 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:06.038 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.038 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:06.038 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:06.038 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:06.038 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:17:06.038 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:17:06.038 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:17:06.038 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:17:06.038 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:06.038 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:06.038 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:06.038 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:17:06.038 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:17:06.038 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:17:06.038 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:06.038 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:06.606 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:06.606 Waiting for block devices as requested 00:17:06.606 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:06.606 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:06.606 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:17:06.606 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:06.606 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:17:06.606 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:17:06.606 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:06.606 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:06.606 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:17:06.606 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:17:06.606 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:06.865 No valid GPT data, bailing 00:17:06.865 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:06.865 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:06.865 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:06.865 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:17:06.865 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:17:06.865 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:06.865 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n2 00:17:06.865 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:17:06.865 03:19:49 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:06.865 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:06.865 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n2 00:17:06.865 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:17:06.865 03:19:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:06.865 No valid GPT data, bailing 00:17:06.865 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:06.865 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:06.865 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:06.865 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n2 00:17:06.865 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:17:06.865 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:06.865 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n3 00:17:06.865 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:17:06.865 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:06.865 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:06.865 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n3 00:17:06.865 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:17:06.865 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:06.865 No valid GPT data, bailing 00:17:06.865 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:06.865 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:06.865 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:06.865 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n3 00:17:06.865 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:17:06.865 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:06.865 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme1n1 00:17:06.865 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:17:06.865 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:06.865 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:06.865 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme1n1 00:17:06.865 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:17:06.865 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:07.124 No valid GPT data, bailing 00:17:07.124 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:07.124 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:07.124 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:07.124 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme1n1 00:17:07.124 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme1n1 ]] 00:17:07.124 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:07.124 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:07.124 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:07.124 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:17:07.124 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:17:07.124 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme1n1 00:17:07.124 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:17:07.124 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:17:07.124 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:17:07.124 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:17:07.124 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:17:07.124 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:07.124 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid=cb2c30f2-294c-46db-807f-ce0b3b357918 -a 10.0.0.1 -t tcp -s 4420 00:17:07.124 00:17:07.124 Discovery Log Number of Records 2, Generation counter 2 00:17:07.124 =====Discovery Log Entry 0====== 00:17:07.124 trtype: tcp 00:17:07.124 adrfam: ipv4 00:17:07.124 subtype: current discovery subsystem 00:17:07.124 treq: not specified, sq flow control disable supported 00:17:07.124 portid: 1 00:17:07.124 trsvcid: 4420 00:17:07.124 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:07.124 traddr: 10.0.0.1 00:17:07.124 eflags: none 00:17:07.124 sectype: none 00:17:07.124 =====Discovery Log Entry 1====== 00:17:07.124 trtype: tcp 00:17:07.124 adrfam: ipv4 00:17:07.124 subtype: nvme subsystem 00:17:07.124 treq: not 
specified, sq flow control disable supported 00:17:07.124 portid: 1 00:17:07.124 trsvcid: 4420 00:17:07.124 subnqn: nqn.2016-06.io.spdk:testnqn 00:17:07.124 traddr: 10.0.0.1 00:17:07.124 eflags: none 00:17:07.124 sectype: none 00:17:07.124 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:17:07.124 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:17:07.385 ===================================================== 00:17:07.385 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:07.385 ===================================================== 00:17:07.385 Controller Capabilities/Features 00:17:07.385 ================================ 00:17:07.385 Vendor ID: 0000 00:17:07.385 Subsystem Vendor ID: 0000 00:17:07.385 Serial Number: 23855483cd31fa3d9283 00:17:07.385 Model Number: Linux 00:17:07.385 Firmware Version: 6.8.9-20 00:17:07.385 Recommended Arb Burst: 0 00:17:07.385 IEEE OUI Identifier: 00 00 00 00:17:07.385 Multi-path I/O 00:17:07.385 May have multiple subsystem ports: No 00:17:07.385 May have multiple controllers: No 00:17:07.385 Associated with SR-IOV VF: No 00:17:07.385 Max Data Transfer Size: Unlimited 00:17:07.385 Max Number of Namespaces: 0 00:17:07.385 Max Number of I/O Queues: 1024 00:17:07.385 NVMe Specification Version (VS): 1.3 00:17:07.385 NVMe Specification Version (Identify): 1.3 00:17:07.385 Maximum Queue Entries: 1024 00:17:07.385 Contiguous Queues Required: No 00:17:07.385 Arbitration Mechanisms Supported 00:17:07.385 Weighted Round Robin: Not Supported 00:17:07.385 Vendor Specific: Not Supported 00:17:07.385 Reset Timeout: 7500 ms 00:17:07.385 Doorbell Stride: 4 bytes 00:17:07.385 NVM Subsystem Reset: Not Supported 00:17:07.385 Command Sets Supported 00:17:07.385 NVM Command Set: Supported 00:17:07.385 Boot Partition: Not Supported 00:17:07.385 Memory Page Size Minimum: 4096 bytes 00:17:07.385 Memory Page Size Maximum: 4096 bytes 00:17:07.385 Persistent Memory Region: Not Supported 00:17:07.385 Optional Asynchronous Events Supported 00:17:07.385 Namespace Attribute Notices: Not Supported 00:17:07.385 Firmware Activation Notices: Not Supported 00:17:07.385 ANA Change Notices: Not Supported 00:17:07.385 PLE Aggregate Log Change Notices: Not Supported 00:17:07.385 LBA Status Info Alert Notices: Not Supported 00:17:07.385 EGE Aggregate Log Change Notices: Not Supported 00:17:07.385 Normal NVM Subsystem Shutdown event: Not Supported 00:17:07.385 Zone Descriptor Change Notices: Not Supported 00:17:07.385 Discovery Log Change Notices: Supported 00:17:07.385 Controller Attributes 00:17:07.385 128-bit Host Identifier: Not Supported 00:17:07.385 Non-Operational Permissive Mode: Not Supported 00:17:07.385 NVM Sets: Not Supported 00:17:07.385 Read Recovery Levels: Not Supported 00:17:07.385 Endurance Groups: Not Supported 00:17:07.385 Predictable Latency Mode: Not Supported 00:17:07.385 Traffic Based Keep ALive: Not Supported 00:17:07.385 Namespace Granularity: Not Supported 00:17:07.385 SQ Associations: Not Supported 00:17:07.385 UUID List: Not Supported 00:17:07.385 Multi-Domain Subsystem: Not Supported 00:17:07.385 Fixed Capacity Management: Not Supported 00:17:07.385 Variable Capacity Management: Not Supported 00:17:07.385 Delete Endurance Group: Not Supported 00:17:07.385 Delete NVM Set: Not Supported 00:17:07.385 Extended LBA Formats Supported: Not Supported 00:17:07.385 Flexible Data 
Placement Supported: Not Supported 00:17:07.385 00:17:07.385 Controller Memory Buffer Support 00:17:07.385 ================================ 00:17:07.385 Supported: No 00:17:07.385 00:17:07.385 Persistent Memory Region Support 00:17:07.385 ================================ 00:17:07.385 Supported: No 00:17:07.385 00:17:07.385 Admin Command Set Attributes 00:17:07.385 ============================ 00:17:07.385 Security Send/Receive: Not Supported 00:17:07.385 Format NVM: Not Supported 00:17:07.385 Firmware Activate/Download: Not Supported 00:17:07.385 Namespace Management: Not Supported 00:17:07.385 Device Self-Test: Not Supported 00:17:07.385 Directives: Not Supported 00:17:07.385 NVMe-MI: Not Supported 00:17:07.385 Virtualization Management: Not Supported 00:17:07.385 Doorbell Buffer Config: Not Supported 00:17:07.385 Get LBA Status Capability: Not Supported 00:17:07.385 Command & Feature Lockdown Capability: Not Supported 00:17:07.385 Abort Command Limit: 1 00:17:07.385 Async Event Request Limit: 1 00:17:07.385 Number of Firmware Slots: N/A 00:17:07.385 Firmware Slot 1 Read-Only: N/A 00:17:07.385 Firmware Activation Without Reset: N/A 00:17:07.385 Multiple Update Detection Support: N/A 00:17:07.385 Firmware Update Granularity: No Information Provided 00:17:07.385 Per-Namespace SMART Log: No 00:17:07.385 Asymmetric Namespace Access Log Page: Not Supported 00:17:07.385 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:07.385 Command Effects Log Page: Not Supported 00:17:07.385 Get Log Page Extended Data: Supported 00:17:07.385 Telemetry Log Pages: Not Supported 00:17:07.385 Persistent Event Log Pages: Not Supported 00:17:07.385 Supported Log Pages Log Page: May Support 00:17:07.385 Commands Supported & Effects Log Page: Not Supported 00:17:07.385 Feature Identifiers & Effects Log Page:May Support 00:17:07.385 NVMe-MI Commands & Effects Log Page: May Support 00:17:07.385 Data Area 4 for Telemetry Log: Not Supported 00:17:07.385 Error Log Page Entries Supported: 1 00:17:07.385 Keep Alive: Not Supported 00:17:07.385 00:17:07.385 NVM Command Set Attributes 00:17:07.385 ========================== 00:17:07.385 Submission Queue Entry Size 00:17:07.385 Max: 1 00:17:07.385 Min: 1 00:17:07.385 Completion Queue Entry Size 00:17:07.385 Max: 1 00:17:07.385 Min: 1 00:17:07.385 Number of Namespaces: 0 00:17:07.385 Compare Command: Not Supported 00:17:07.385 Write Uncorrectable Command: Not Supported 00:17:07.385 Dataset Management Command: Not Supported 00:17:07.385 Write Zeroes Command: Not Supported 00:17:07.385 Set Features Save Field: Not Supported 00:17:07.385 Reservations: Not Supported 00:17:07.385 Timestamp: Not Supported 00:17:07.385 Copy: Not Supported 00:17:07.385 Volatile Write Cache: Not Present 00:17:07.385 Atomic Write Unit (Normal): 1 00:17:07.385 Atomic Write Unit (PFail): 1 00:17:07.385 Atomic Compare & Write Unit: 1 00:17:07.385 Fused Compare & Write: Not Supported 00:17:07.385 Scatter-Gather List 00:17:07.385 SGL Command Set: Supported 00:17:07.385 SGL Keyed: Not Supported 00:17:07.385 SGL Bit Bucket Descriptor: Not Supported 00:17:07.385 SGL Metadata Pointer: Not Supported 00:17:07.385 Oversized SGL: Not Supported 00:17:07.385 SGL Metadata Address: Not Supported 00:17:07.385 SGL Offset: Supported 00:17:07.385 Transport SGL Data Block: Not Supported 00:17:07.385 Replay Protected Memory Block: Not Supported 00:17:07.385 00:17:07.385 Firmware Slot Information 00:17:07.385 ========================= 00:17:07.385 Active slot: 0 00:17:07.385 00:17:07.385 00:17:07.385 Error Log 
00:17:07.385 ========= 00:17:07.385 00:17:07.385 Active Namespaces 00:17:07.385 ================= 00:17:07.385 Discovery Log Page 00:17:07.385 ================== 00:17:07.385 Generation Counter: 2 00:17:07.385 Number of Records: 2 00:17:07.385 Record Format: 0 00:17:07.385 00:17:07.385 Discovery Log Entry 0 00:17:07.385 ---------------------- 00:17:07.385 Transport Type: 3 (TCP) 00:17:07.385 Address Family: 1 (IPv4) 00:17:07.385 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:07.385 Entry Flags: 00:17:07.385 Duplicate Returned Information: 0 00:17:07.385 Explicit Persistent Connection Support for Discovery: 0 00:17:07.385 Transport Requirements: 00:17:07.385 Secure Channel: Not Specified 00:17:07.385 Port ID: 1 (0x0001) 00:17:07.385 Controller ID: 65535 (0xffff) 00:17:07.385 Admin Max SQ Size: 32 00:17:07.385 Transport Service Identifier: 4420 00:17:07.385 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:07.385 Transport Address: 10.0.0.1 00:17:07.385 Discovery Log Entry 1 00:17:07.385 ---------------------- 00:17:07.385 Transport Type: 3 (TCP) 00:17:07.385 Address Family: 1 (IPv4) 00:17:07.385 Subsystem Type: 2 (NVM Subsystem) 00:17:07.385 Entry Flags: 00:17:07.385 Duplicate Returned Information: 0 00:17:07.385 Explicit Persistent Connection Support for Discovery: 0 00:17:07.385 Transport Requirements: 00:17:07.385 Secure Channel: Not Specified 00:17:07.385 Port ID: 1 (0x0001) 00:17:07.385 Controller ID: 65535 (0xffff) 00:17:07.385 Admin Max SQ Size: 32 00:17:07.385 Transport Service Identifier: 4420 00:17:07.385 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:17:07.386 Transport Address: 10.0.0.1 00:17:07.386 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:17:07.386 get_feature(0x01) failed 00:17:07.386 get_feature(0x02) failed 00:17:07.386 get_feature(0x04) failed 00:17:07.386 ===================================================== 00:17:07.386 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:17:07.386 ===================================================== 00:17:07.386 Controller Capabilities/Features 00:17:07.386 ================================ 00:17:07.386 Vendor ID: 0000 00:17:07.386 Subsystem Vendor ID: 0000 00:17:07.386 Serial Number: a8e52509de6420cfe0aa 00:17:07.386 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:17:07.386 Firmware Version: 6.8.9-20 00:17:07.386 Recommended Arb Burst: 6 00:17:07.386 IEEE OUI Identifier: 00 00 00 00:17:07.386 Multi-path I/O 00:17:07.386 May have multiple subsystem ports: Yes 00:17:07.386 May have multiple controllers: Yes 00:17:07.386 Associated with SR-IOV VF: No 00:17:07.386 Max Data Transfer Size: Unlimited 00:17:07.386 Max Number of Namespaces: 1024 00:17:07.386 Max Number of I/O Queues: 128 00:17:07.386 NVMe Specification Version (VS): 1.3 00:17:07.386 NVMe Specification Version (Identify): 1.3 00:17:07.386 Maximum Queue Entries: 1024 00:17:07.386 Contiguous Queues Required: No 00:17:07.386 Arbitration Mechanisms Supported 00:17:07.386 Weighted Round Robin: Not Supported 00:17:07.386 Vendor Specific: Not Supported 00:17:07.386 Reset Timeout: 7500 ms 00:17:07.386 Doorbell Stride: 4 bytes 00:17:07.386 NVM Subsystem Reset: Not Supported 00:17:07.386 Command Sets Supported 00:17:07.386 NVM Command Set: Supported 00:17:07.386 Boot Partition: Not Supported 00:17:07.386 Memory 
Page Size Minimum: 4096 bytes 00:17:07.386 Memory Page Size Maximum: 4096 bytes 00:17:07.386 Persistent Memory Region: Not Supported 00:17:07.386 Optional Asynchronous Events Supported 00:17:07.386 Namespace Attribute Notices: Supported 00:17:07.386 Firmware Activation Notices: Not Supported 00:17:07.386 ANA Change Notices: Supported 00:17:07.386 PLE Aggregate Log Change Notices: Not Supported 00:17:07.386 LBA Status Info Alert Notices: Not Supported 00:17:07.386 EGE Aggregate Log Change Notices: Not Supported 00:17:07.386 Normal NVM Subsystem Shutdown event: Not Supported 00:17:07.386 Zone Descriptor Change Notices: Not Supported 00:17:07.386 Discovery Log Change Notices: Not Supported 00:17:07.386 Controller Attributes 00:17:07.386 128-bit Host Identifier: Supported 00:17:07.386 Non-Operational Permissive Mode: Not Supported 00:17:07.386 NVM Sets: Not Supported 00:17:07.386 Read Recovery Levels: Not Supported 00:17:07.386 Endurance Groups: Not Supported 00:17:07.386 Predictable Latency Mode: Not Supported 00:17:07.386 Traffic Based Keep ALive: Supported 00:17:07.386 Namespace Granularity: Not Supported 00:17:07.386 SQ Associations: Not Supported 00:17:07.386 UUID List: Not Supported 00:17:07.386 Multi-Domain Subsystem: Not Supported 00:17:07.386 Fixed Capacity Management: Not Supported 00:17:07.386 Variable Capacity Management: Not Supported 00:17:07.386 Delete Endurance Group: Not Supported 00:17:07.386 Delete NVM Set: Not Supported 00:17:07.386 Extended LBA Formats Supported: Not Supported 00:17:07.386 Flexible Data Placement Supported: Not Supported 00:17:07.386 00:17:07.386 Controller Memory Buffer Support 00:17:07.386 ================================ 00:17:07.386 Supported: No 00:17:07.386 00:17:07.386 Persistent Memory Region Support 00:17:07.386 ================================ 00:17:07.386 Supported: No 00:17:07.386 00:17:07.386 Admin Command Set Attributes 00:17:07.386 ============================ 00:17:07.386 Security Send/Receive: Not Supported 00:17:07.386 Format NVM: Not Supported 00:17:07.386 Firmware Activate/Download: Not Supported 00:17:07.386 Namespace Management: Not Supported 00:17:07.386 Device Self-Test: Not Supported 00:17:07.386 Directives: Not Supported 00:17:07.386 NVMe-MI: Not Supported 00:17:07.386 Virtualization Management: Not Supported 00:17:07.386 Doorbell Buffer Config: Not Supported 00:17:07.386 Get LBA Status Capability: Not Supported 00:17:07.386 Command & Feature Lockdown Capability: Not Supported 00:17:07.386 Abort Command Limit: 4 00:17:07.386 Async Event Request Limit: 4 00:17:07.386 Number of Firmware Slots: N/A 00:17:07.386 Firmware Slot 1 Read-Only: N/A 00:17:07.386 Firmware Activation Without Reset: N/A 00:17:07.386 Multiple Update Detection Support: N/A 00:17:07.386 Firmware Update Granularity: No Information Provided 00:17:07.386 Per-Namespace SMART Log: Yes 00:17:07.386 Asymmetric Namespace Access Log Page: Supported 00:17:07.386 ANA Transition Time : 10 sec 00:17:07.386 00:17:07.386 Asymmetric Namespace Access Capabilities 00:17:07.386 ANA Optimized State : Supported 00:17:07.386 ANA Non-Optimized State : Supported 00:17:07.386 ANA Inaccessible State : Supported 00:17:07.386 ANA Persistent Loss State : Supported 00:17:07.386 ANA Change State : Supported 00:17:07.386 ANAGRPID is not changed : No 00:17:07.386 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:17:07.386 00:17:07.386 ANA Group Identifier Maximum : 128 00:17:07.386 Number of ANA Group Identifiers : 128 00:17:07.386 Max Number of Allowed Namespaces : 1024 00:17:07.386 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:17:07.386 Command Effects Log Page: Supported 00:17:07.386 Get Log Page Extended Data: Supported 00:17:07.386 Telemetry Log Pages: Not Supported 00:17:07.386 Persistent Event Log Pages: Not Supported 00:17:07.386 Supported Log Pages Log Page: May Support 00:17:07.386 Commands Supported & Effects Log Page: Not Supported 00:17:07.386 Feature Identifiers & Effects Log Page:May Support 00:17:07.386 NVMe-MI Commands & Effects Log Page: May Support 00:17:07.386 Data Area 4 for Telemetry Log: Not Supported 00:17:07.386 Error Log Page Entries Supported: 128 00:17:07.386 Keep Alive: Supported 00:17:07.386 Keep Alive Granularity: 1000 ms 00:17:07.386 00:17:07.386 NVM Command Set Attributes 00:17:07.386 ========================== 00:17:07.386 Submission Queue Entry Size 00:17:07.386 Max: 64 00:17:07.386 Min: 64 00:17:07.386 Completion Queue Entry Size 00:17:07.386 Max: 16 00:17:07.386 Min: 16 00:17:07.386 Number of Namespaces: 1024 00:17:07.386 Compare Command: Not Supported 00:17:07.386 Write Uncorrectable Command: Not Supported 00:17:07.386 Dataset Management Command: Supported 00:17:07.386 Write Zeroes Command: Supported 00:17:07.386 Set Features Save Field: Not Supported 00:17:07.386 Reservations: Not Supported 00:17:07.386 Timestamp: Not Supported 00:17:07.386 Copy: Not Supported 00:17:07.386 Volatile Write Cache: Present 00:17:07.386 Atomic Write Unit (Normal): 1 00:17:07.386 Atomic Write Unit (PFail): 1 00:17:07.386 Atomic Compare & Write Unit: 1 00:17:07.386 Fused Compare & Write: Not Supported 00:17:07.386 Scatter-Gather List 00:17:07.386 SGL Command Set: Supported 00:17:07.386 SGL Keyed: Not Supported 00:17:07.386 SGL Bit Bucket Descriptor: Not Supported 00:17:07.386 SGL Metadata Pointer: Not Supported 00:17:07.386 Oversized SGL: Not Supported 00:17:07.386 SGL Metadata Address: Not Supported 00:17:07.386 SGL Offset: Supported 00:17:07.386 Transport SGL Data Block: Not Supported 00:17:07.386 Replay Protected Memory Block: Not Supported 00:17:07.386 00:17:07.386 Firmware Slot Information 00:17:07.386 ========================= 00:17:07.386 Active slot: 0 00:17:07.386 00:17:07.386 Asymmetric Namespace Access 00:17:07.386 =========================== 00:17:07.386 Change Count : 0 00:17:07.386 Number of ANA Group Descriptors : 1 00:17:07.386 ANA Group Descriptor : 0 00:17:07.386 ANA Group ID : 1 00:17:07.386 Number of NSID Values : 1 00:17:07.386 Change Count : 0 00:17:07.386 ANA State : 1 00:17:07.386 Namespace Identifier : 1 00:17:07.386 00:17:07.386 Commands Supported and Effects 00:17:07.386 ============================== 00:17:07.386 Admin Commands 00:17:07.386 -------------- 00:17:07.386 Get Log Page (02h): Supported 00:17:07.386 Identify (06h): Supported 00:17:07.386 Abort (08h): Supported 00:17:07.386 Set Features (09h): Supported 00:17:07.386 Get Features (0Ah): Supported 00:17:07.386 Asynchronous Event Request (0Ch): Supported 00:17:07.386 Keep Alive (18h): Supported 00:17:07.386 I/O Commands 00:17:07.386 ------------ 00:17:07.386 Flush (00h): Supported 00:17:07.386 Write (01h): Supported LBA-Change 00:17:07.386 Read (02h): Supported 00:17:07.386 Write Zeroes (08h): Supported LBA-Change 00:17:07.386 Dataset Management (09h): Supported 00:17:07.386 00:17:07.386 Error Log 00:17:07.386 ========= 00:17:07.386 Entry: 0 00:17:07.386 Error Count: 0x3 00:17:07.386 Submission Queue Id: 0x0 00:17:07.386 Command Id: 0x5 00:17:07.386 Phase Bit: 0 00:17:07.386 Status Code: 0x2 00:17:07.386 Status Code Type: 0x0 00:17:07.386 Do Not Retry: 1 00:17:07.386 Error 
Location: 0x28 00:17:07.386 LBA: 0x0 00:17:07.386 Namespace: 0x0 00:17:07.387 Vendor Log Page: 0x0 00:17:07.387 ----------- 00:17:07.387 Entry: 1 00:17:07.387 Error Count: 0x2 00:17:07.387 Submission Queue Id: 0x0 00:17:07.387 Command Id: 0x5 00:17:07.387 Phase Bit: 0 00:17:07.387 Status Code: 0x2 00:17:07.387 Status Code Type: 0x0 00:17:07.387 Do Not Retry: 1 00:17:07.387 Error Location: 0x28 00:17:07.387 LBA: 0x0 00:17:07.387 Namespace: 0x0 00:17:07.387 Vendor Log Page: 0x0 00:17:07.387 ----------- 00:17:07.387 Entry: 2 00:17:07.387 Error Count: 0x1 00:17:07.387 Submission Queue Id: 0x0 00:17:07.387 Command Id: 0x4 00:17:07.387 Phase Bit: 0 00:17:07.387 Status Code: 0x2 00:17:07.387 Status Code Type: 0x0 00:17:07.387 Do Not Retry: 1 00:17:07.387 Error Location: 0x28 00:17:07.387 LBA: 0x0 00:17:07.387 Namespace: 0x0 00:17:07.387 Vendor Log Page: 0x0 00:17:07.387 00:17:07.387 Number of Queues 00:17:07.387 ================ 00:17:07.387 Number of I/O Submission Queues: 128 00:17:07.387 Number of I/O Completion Queues: 128 00:17:07.387 00:17:07.387 ZNS Specific Controller Data 00:17:07.387 ============================ 00:17:07.387 Zone Append Size Limit: 0 00:17:07.387 00:17:07.387 00:17:07.387 Active Namespaces 00:17:07.387 ================= 00:17:07.387 get_feature(0x05) failed 00:17:07.387 Namespace ID:1 00:17:07.387 Command Set Identifier: NVM (00h) 00:17:07.387 Deallocate: Supported 00:17:07.387 Deallocated/Unwritten Error: Not Supported 00:17:07.387 Deallocated Read Value: Unknown 00:17:07.387 Deallocate in Write Zeroes: Not Supported 00:17:07.387 Deallocated Guard Field: 0xFFFF 00:17:07.387 Flush: Supported 00:17:07.387 Reservation: Not Supported 00:17:07.387 Namespace Sharing Capabilities: Multiple Controllers 00:17:07.387 Size (in LBAs): 1310720 (5GiB) 00:17:07.387 Capacity (in LBAs): 1310720 (5GiB) 00:17:07.387 Utilization (in LBAs): 1310720 (5GiB) 00:17:07.387 UUID: 1dde3ba5-9ea0-47bc-9676-ede0a004345c 00:17:07.387 Thin Provisioning: Not Supported 00:17:07.387 Per-NS Atomic Units: Yes 00:17:07.387 Atomic Boundary Size (Normal): 0 00:17:07.387 Atomic Boundary Size (PFail): 0 00:17:07.387 Atomic Boundary Offset: 0 00:17:07.387 NGUID/EUI64 Never Reused: No 00:17:07.387 ANA group ID: 1 00:17:07.387 Namespace Write Protected: No 00:17:07.387 Number of LBA Formats: 1 00:17:07.387 Current LBA Format: LBA Format #00 00:17:07.387 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:17:07.387 00:17:07.387 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:17:07.387 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:07.387 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:17:07.646 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:07.646 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:17:07.646 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:07.646 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:07.646 rmmod nvme_tcp 00:17:07.646 rmmod nvme_fabrics 00:17:07.646 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:07.646 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:17:07.646 03:19:50 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:17:07.646 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:17:07.646 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:07.646 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:07.646 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:07.646 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:17:07.646 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:17:07.646 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:07.646 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:17:07.646 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:07.646 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:07.646 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:07.646 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:07.646 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:07.646 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:07.646 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:07.646 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:07.646 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:07.646 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:07.646 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:07.646 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:07.646 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:07.646 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:07.905 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:07.905 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:07.905 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.905 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:07.905 03:19:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.905 03:19:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:17:07.905 03:19:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:17:07.905 03:19:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:17:07.905 03:19:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:17:07.905 03:19:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:07.905 03:19:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:07.906 03:19:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:07.906 03:19:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:07.906 03:19:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:17:07.906 03:19:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:17:07.906 03:19:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:08.473 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:08.732 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:08.732 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:08.732 00:17:08.732 real 0m3.364s 00:17:08.732 user 0m1.139s 00:17:08.732 sys 0m1.544s 00:17:08.733 03:19:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:08.733 03:19:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.733 ************************************ 00:17:08.733 END TEST nvmf_identify_kernel_target 00:17:08.733 ************************************ 00:17:08.733 03:19:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:08.733 03:19:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:08.733 03:19:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:08.733 03:19:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.733 ************************************ 00:17:08.733 START TEST nvmf_auth_host 00:17:08.733 ************************************ 00:17:08.733 03:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:08.992 * Looking for test storage... 
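The identify_kernel_target run that just finished drives the kernel nvmet configfs interface directly: configure_kernel_target exports a local NVMe namespace over TCP, and clean_kernel_target (immediately above) tears it down again. The echo redirection targets are hidden by xtrace, so the standard nvmet attribute file names below are a reasonable reconstruction of the sequence rather than a verbatim quote of common.sh:

nqn=nqn.2016-06.io.spdk:testnqn
cfg=/sys/kernel/config/nvmet
modprobe nvmet                                            # the tcp transport is loaded on demand (or via modprobe nvmet-tcp)
mkdir $cfg/subsystems/$nqn
echo "SPDK-$nqn"  > $cfg/subsystems/$nqn/attr_model       # shows up as Model Number in the identify output above
echo 1            > $cfg/subsystems/$nqn/attr_allow_any_host
mkdir $cfg/subsystems/$nqn/namespaces/1
echo /dev/nvme1n1 > $cfg/subsystems/$nqn/namespaces/1/device_path   # first unused, non-zoned namespace found by the GPT scan
echo 1            > $cfg/subsystems/$nqn/namespaces/1/enable
mkdir $cfg/ports/1
echo 10.0.0.1     > $cfg/ports/1/addr_traddr
echo tcp          > $cfg/ports/1/addr_trtype
echo 4420         > $cfg/ports/1/addr_trsvcid
echo ipv4         > $cfg/ports/1/addr_adrfam
ln -s $cfg/subsystems/$nqn $cfg/ports/1/subsystems/$nqn   # exposes the subsystem on the port
nvme discover -t tcp -a 10.0.0.1 -s 4420                  # returns the two discovery records shown above

The harness additionally passes the --hostnqn/--hostid pair generated earlier by nvme gen-hostnqn, and teardown reverses the steps: remove the port symlink, rmdir namespaces/1, ports/1 and the subsystem directory, then modprobe -r nvmet_tcp nvmet.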
00:17:08.992 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:08.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.992 --rc genhtml_branch_coverage=1 00:17:08.992 --rc genhtml_function_coverage=1 00:17:08.992 --rc genhtml_legend=1 00:17:08.992 --rc geninfo_all_blocks=1 00:17:08.992 --rc geninfo_unexecuted_blocks=1 00:17:08.992 00:17:08.992 ' 00:17:08.992 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:08.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.992 --rc genhtml_branch_coverage=1 00:17:08.992 --rc genhtml_function_coverage=1 00:17:08.992 --rc genhtml_legend=1 00:17:08.992 --rc geninfo_all_blocks=1 00:17:08.993 --rc geninfo_unexecuted_blocks=1 00:17:08.993 00:17:08.993 ' 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:08.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.993 --rc genhtml_branch_coverage=1 00:17:08.993 --rc genhtml_function_coverage=1 00:17:08.993 --rc genhtml_legend=1 00:17:08.993 --rc geninfo_all_blocks=1 00:17:08.993 --rc geninfo_unexecuted_blocks=1 00:17:08.993 00:17:08.993 ' 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:08.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.993 --rc genhtml_branch_coverage=1 00:17:08.993 --rc genhtml_function_coverage=1 00:17:08.993 --rc genhtml_legend=1 00:17:08.993 --rc geninfo_all_blocks=1 00:17:08.993 --rc geninfo_unexecuted_blocks=1 00:17:08.993 00:17:08.993 ' 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:08.993 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # nvmf_veth_init 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:08.993 Cannot find device "nvmf_init_br" 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:08.993 Cannot find device "nvmf_init_br2" 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:08.993 Cannot find device "nvmf_tgt_br" 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:08.993 Cannot find device "nvmf_tgt_br2" 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:08.993 Cannot find device "nvmf_init_br" 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:08.993 Cannot find device "nvmf_init_br2" 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:17:08.993 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:08.993 Cannot find device "nvmf_tgt_br" 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:09.253 Cannot find device "nvmf_tgt_br2" 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:09.253 Cannot find device "nvmf_br" 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:09.253 Cannot find device "nvmf_init_if" 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:09.253 Cannot find device "nvmf_init_if2" 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:09.253 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:09.253 03:19:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:09.253 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:09.253 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:09.512 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:09.512 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
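The nvmf_veth_init steps above build a small bridged test topology: each interface is one end of a veth pair, the *_if ends carry the 10.0.0.x/24 addresses (with the target ends living inside the nvmf_tgt_ns_spdk namespace), and the *_br ends are enslaved to the nvmf_br bridge. A condensed sketch for one initiator pair and one target pair; the trace repeats the same pattern for the *2 interfaces:

ns=nvmf_tgt_ns_spdk
ip netns add "$ns"
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root namespace
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns "$ns"                          # target end moves into the test namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$ns" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec "$ns" ip link set nvmf_tgt_if up
ip netns exec "$ns" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                      # the bridge ties both halves together
ip link set nvmf_tgt_br master nvmf_br
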
00:17:09.512 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:09.512 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:09.512 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:09.512 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:09.512 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:09.512 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:09.512 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:09.512 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:09.512 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:09.512 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:17:09.512 00:17:09.512 --- 10.0.0.3 ping statistics --- 00:17:09.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.512 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:17:09.512 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:09.513 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:09.513 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:17:09.513 00:17:09.513 --- 10.0.0.4 ping statistics --- 00:17:09.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.513 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:17:09.513 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:09.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:09.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:17:09.513 00:17:09.513 --- 10.0.0.1 ping statistics --- 00:17:09.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.513 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:17:09.513 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:09.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:09.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:17:09.513 00:17:09.513 --- 10.0.0.2 ping statistics --- 00:17:09.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.513 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:17:09.513 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:09.513 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # return 0 00:17:09.513 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:09.513 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:09.513 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:09.513 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:09.513 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:09.513 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:09.513 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:09.513 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:17:09.513 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:09.513 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:09.513 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.513 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=78355 00:17:09.513 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:17:09.513 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 78355 00:17:09.513 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 78355 ']' 00:17:09.513 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.513 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:09.513 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
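Two details in the block above are worth noting. The ACCEPT rules are inserted with an "SPDK_NVMF:" comment, which is what lets the teardown at the top of this section drop exactly those rules with iptables-save | grep -v SPDK_NVMF | iptables-restore. The target is then launched inside the namespace and the harness waits for its RPC socket; the polling loop below only approximates the waitforlisten helper, whose internals are not shown in the trace.

# tag every test rule so it can be stripped wholesale later
rule=(-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT)
iptables "${rule[@]}" -m comment --comment "SPDK_NVMF:${rule[*]}"

# start nvmf_tgt inside the target namespace and wait until the RPC socket answers
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods &>/dev/null; do
  sleep 0.5
done
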
00:17:09.513 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:09.513 03:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=bc0c80f97c2d270f3f3b568db448cc84 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.wwI 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key bc0c80f97c2d270f3f3b568db448cc84 0 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 bc0c80f97c2d270f3f3b568db448cc84 0 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=bc0c80f97c2d270f3f3b568db448cc84 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.wwI 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.wwI 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.wwI 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:17:10.891 03:19:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=14bb07334a88f45f833d2d7681145451ed929eb434482125892fe1af1dac79ee 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.3LB 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 14bb07334a88f45f833d2d7681145451ed929eb434482125892fe1af1dac79ee 3 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 14bb07334a88f45f833d2d7681145451ed929eb434482125892fe1af1dac79ee 3 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=14bb07334a88f45f833d2d7681145451ed929eb434482125892fe1af1dac79ee 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.3LB 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.3LB 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.3LB 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=cd86877a22b9131a03edebeabcc7be08d667554daad9d8e2 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.Jtz 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key cd86877a22b9131a03edebeabcc7be08d667554daad9d8e2 0 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 cd86877a22b9131a03edebeabcc7be08d667554daad9d8e2 0 
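Each gen_dhchap_key call above follows the same pattern: xxd reads half as many random bytes as the requested hex length from /dev/urandom, and the hex secret is wrapped into the DHHC-1:<hash>:<base64>: form used for DH-HMAC-CHAP secrets before being written to a mode-0600 temp file. The sketch below reproduces that flow; the python step's contents are not visible in the trace, so the encoding shown (secret bytes plus their CRC-32, base64-encoded) is the standard secret representation and an assumption about the helper's internals.

gen_key() { # gen_key <hash id: 0=none 1=sha256 2=sha384 3=sha512> <hex length>
  local digest=$1 len=$2 hex
  hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # e.g. 32 hex chars come from 16 random bytes
  python3 -c '
import base64, sys, zlib
secret = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(secret).to_bytes(4, "little")     # assumed: CRC-32 appended before base64
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(secret + crc).decode()))
' "$hex" "$digest"
}

key_file=$(mktemp -t spdk.key-null.XXX)
gen_key 0 32 > "$key_file" && chmod 0600 "$key_file"   # corresponds to keys[0] above
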
00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=cd86877a22b9131a03edebeabcc7be08d667554daad9d8e2 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:17:10.891 03:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:17:10.891 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.Jtz 00:17:10.891 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.Jtz 00:17:10.891 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Jtz 00:17:10.891 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:17:10.891 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:17:10.891 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:10.891 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:17:10.891 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:17:10.891 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:17:10.891 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:10.891 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=32360f7ec4138a58ac457820cb93fc1fc376977859463c54 00:17:10.891 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:17:10.891 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.2rY 00:17:10.891 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 32360f7ec4138a58ac457820cb93fc1fc376977859463c54 2 00:17:10.891 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 32360f7ec4138a58ac457820cb93fc1fc376977859463c54 2 00:17:10.891 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:17:10.891 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=32360f7ec4138a58ac457820cb93fc1fc376977859463c54 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.2rY 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.2rY 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.2rY 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:10.892 03:19:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=8818b0dba8a860bca60718506a201279 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.1WK 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 8818b0dba8a860bca60718506a201279 1 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 8818b0dba8a860bca60718506a201279 1 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=8818b0dba8a860bca60718506a201279 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.1WK 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.1WK 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.1WK 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=941fc91b71063b9fcd897dad1690c006 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.GAy 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 941fc91b71063b9fcd897dad1690c006 1 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 941fc91b71063b9fcd897dad1690c006 1 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
key=941fc91b71063b9fcd897dad1690c006 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:17:10.892 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.GAy 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.GAy 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.GAy 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=3e8efc3b81da2a8dea82223aef55054e216fd2f4eec808de 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.xHT 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 3e8efc3b81da2a8dea82223aef55054e216fd2f4eec808de 2 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 3e8efc3b81da2a8dea82223aef55054e216fd2f4eec808de 2 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=3e8efc3b81da2a8dea82223aef55054e216fd2f4eec808de 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.xHT 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.xHT 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.xHT 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:17:11.151 03:19:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=4d59b605df3cedafea6edb4b51329560 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.dQN 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 4d59b605df3cedafea6edb4b51329560 0 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 4d59b605df3cedafea6edb4b51329560 0 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=4d59b605df3cedafea6edb4b51329560 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.dQN 00:17:11.151 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.dQN 00:17:11.152 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.dQN 00:17:11.152 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:17:11.152 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:17:11.152 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:11.152 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:17:11.152 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:17:11.152 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:17:11.152 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:11.152 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=73e9f849bd46c9a588a4433120ad280c9893a687e64249dc2abff5b1118ae017 00:17:11.152 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:17:11.152 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.VZf 00:17:11.152 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 73e9f849bd46c9a588a4433120ad280c9893a687e64249dc2abff5b1118ae017 3 00:17:11.152 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 73e9f849bd46c9a588a4433120ad280c9893a687e64249dc2abff5b1118ae017 3 00:17:11.152 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:17:11.152 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:11.152 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=73e9f849bd46c9a588a4433120ad280c9893a687e64249dc2abff5b1118ae017 00:17:11.152 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:17:11.152 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@731 -- # python - 00:17:11.152 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.VZf 00:17:11.152 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.VZf 00:17:11.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.152 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.VZf 00:17:11.152 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:17:11.152 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78355 00:17:11.152 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 78355 ']' 00:17:11.152 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.152 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:11.152 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.152 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:11.152 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.wwI 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.3LB ]] 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3LB 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Jtz 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.2rY ]] 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.2rY 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.1WK 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.GAy ]] 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.GAy 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.xHT 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.dQN ]] 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.dQN 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.VZf 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:11.720 03:19:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:11.720 03:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:11.980 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:11.980 Waiting for block devices as requested 00:17:11.980 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:12.250 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:12.854 03:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:17:12.854 03:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:12.854 03:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:17:12.854 03:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:17:12.854 03:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:12.854 03:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:12.854 03:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:17:12.854 03:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:17:12.854 03:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:12.854 No valid GPT data, bailing 00:17:12.854 03:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:12.854 03:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:12.854 03:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:12.854 03:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:17:12.854 03:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:17:12.854 03:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:12.854 03:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n2 00:17:12.854 03:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:17:12.854 03:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:12.854 03:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:12.854 03:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n2 00:17:12.854 03:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:17:12.854 03:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:12.854 No valid GPT data, bailing 00:17:12.854 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:12.854 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:12.854 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:17:12.854 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n2 00:17:12.854 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:17:12.854 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:12.854 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n3 00:17:12.854 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:17:12.854 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:12.854 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:12.854 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n3 00:17:12.854 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:17:12.854 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:12.854 No valid GPT data, bailing 00:17:12.854 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:12.854 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:12.854 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:12.854 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n3 00:17:12.854 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:17:12.854 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:12.854 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme1n1 00:17:12.854 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:17:12.854 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:12.854 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:12.854 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme1n1 00:17:12.854 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:17:12.854 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:13.112 No valid GPT data, bailing 00:17:13.112 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:13.112 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:13.112 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:13.112 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme1n1 00:17:13.112 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme1n1 ]] 00:17:13.112 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:13.112 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:13.112 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:13.112 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:17:13.112 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:17:13.112 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme1n1 00:17:13.112 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:17:13.112 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:17:13.112 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp 00:17:13.112 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:17:13.112 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:17:13.112 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:13.112 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid=cb2c30f2-294c-46db-807f-ce0b3b357918 -a 10.0.0.1 -t tcp -s 4420 00:17:13.113 00:17:13.113 Discovery Log Number of Records 2, Generation counter 2 00:17:13.113 =====Discovery Log Entry 0====== 00:17:13.113 trtype: tcp 00:17:13.113 adrfam: ipv4 00:17:13.113 subtype: current discovery subsystem 00:17:13.113 treq: not specified, sq flow control disable supported 00:17:13.113 portid: 1 00:17:13.113 trsvcid: 4420 00:17:13.113 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:13.113 traddr: 10.0.0.1 00:17:13.113 eflags: none 00:17:13.113 sectype: none 00:17:13.113 =====Discovery Log Entry 1====== 00:17:13.113 trtype: tcp 00:17:13.113 adrfam: ipv4 00:17:13.113 subtype: nvme subsystem 00:17:13.113 treq: not specified, sq flow control disable supported 00:17:13.113 portid: 1 00:17:13.113 trsvcid: 4420 00:17:13.113 subnqn: nqn.2024-02.io.spdk:cnode0 00:17:13.113 traddr: 10.0.0.1 00:17:13.113 eflags: none 00:17:13.113 sectype: none 00:17:13.113 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:13.113 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:17:13.113 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:13.113 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:13.113 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.113 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:13.113 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:13.113 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:13.113 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:13.113 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:13.113 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:13.113 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:13.113 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:13.113 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: ]] 00:17:13.113 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:13.113 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:13.113 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:17:13.113 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:13.113 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:13.113 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:17:13.113 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:13.113 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:17:13.113 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:13.113 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:13.113 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:13.113 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:13.113 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.113 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.372 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.372 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:13.372 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:13.372 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:13.372 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:13.372 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.372 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.372 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:13.372 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.372 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:13.372 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 
10.0.0.1 ]] 00:17:13.372 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:13.372 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.372 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.372 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.372 nvme0n1 00:17:13.372 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.372 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:13.372 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.372 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.372 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.372 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.372 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.372 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.372 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.372 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.372 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.372 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:13.372 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:13.372 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:13.372 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:17:13.372 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.372 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:13.373 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:13.373 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:13.373 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmMwYzgwZjk3YzJkMjcwZjNmM2I1NjhkYjQ0OGNjODS/yE/D: 00:17:13.373 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: 00:17:13.373 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:13.373 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:13.373 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmMwYzgwZjk3YzJkMjcwZjNmM2I1NjhkYjQ0OGNjODS/yE/D: 00:17:13.373 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: ]] 00:17:13.373 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: 00:17:13.373 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:17:13.373 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:13.373 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:13.373 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:13.373 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:13.373 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:13.373 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:13.373 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.373 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.373 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.373 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:13.373 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:13.373 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:13.373 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:13.373 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.373 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.373 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:13.373 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.373 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:13.373 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:13.373 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:13.373 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.373 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.373 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.633 nvme0n1 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.633 
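The configure_kernel_target sequence traced above (modprobe nvmet, the mkdir calls under /sys/kernel/config/nvmet, and the echoes of the model string, backing device, address, port and address family) builds the in-kernel soft target that the SPDK host authenticates against, and nvmet_auth_init then restricts it to the test host NQN. A rough configfs equivalent is sketched below; the attribute names are the stock kernel nvmet ones (attr_model, device_path, addr_*, attr_allow_any_host) rather than anything shown verbatim in the trace, and /dev/nvme1n1 is simply the device the block scan earlier found unused:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
modprobe nvmet
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$nvmet/ports/1"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"
# nvmet_auth_init: only nqn.2024-02.io.spdk:host0 may connect, so auth is always exercised
mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
echo 0 > "$subsys/attr_allow_any_host"
ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/"

The nvme discover run captured in the trace is then just a sanity check that both the discovery subsystem and cnode0 are reachable on 10.0.0.1:4420 before any authenticated connects are attempted.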
03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: ]] 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:13.633 03:19:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.633 nvme0n1 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.633 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODgxOGIwZGJhOGE4NjBiY2E2MDcxODUwNmEyMDEyNzkPeiUc: 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: 00:17:13.893 03:19:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODgxOGIwZGJhOGE4NjBiY2E2MDcxODUwNmEyMDEyNzkPeiUc: 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: ]] 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.893 03:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.893 nvme0n1 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
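Each nvmet_auth_set_key call in this loop reprograms what the target expects for the next connect: the four echoes visible in the trace ('hmac(sha256)', the DH group, then the DHHC-1 host secret and, when present, the controller secret) land in the host entry created by nvmet_auth_init. A sketch under the assumption that the writes go to the standard dhchap_* attributes of the kernel host entry (the trace shows only the values, not the attribute paths):

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe2048      > "$host/dhchap_dhgroup"
echo "$key"         > "$host/dhchap_key"        # DHHC-1:... host secret for this key slot
echo "$ckey"        > "$host/dhchap_ctrl_key"   # controller secret, skipped when the slot has none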
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2U4ZWZjM2I4MWRhMmE4ZGVhODIyMjNhZWY1NTA1NGUyMTZmZDJmNGVlYzgwOGRlylHk+A==: 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2U4ZWZjM2I4MWRhMmE4ZGVhODIyMjNhZWY1NTA1NGUyMTZmZDJmNGVlYzgwOGRlylHk+A==: 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: ]] 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.893 03:19:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.893 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.152 nvme0n1 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:14.152 
03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzNlOWY4NDliZDQ2YzlhNTg4YTQ0MzMxMjBhZDI4MGM5ODkzYTY4N2U2NDI0OWRjMmFiZmY1YjExMThhZTAxN0XI248=: 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzNlOWY4NDliZDQ2YzlhNTg4YTQ0MzMxMjBhZDI4MGM5ODkzYTY4N2U2NDI0OWRjMmFiZmY1YjExMThhZTAxN0XI248=: 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
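On the initiator side, every connect_authenticate pass in this trace reduces to the same four RPCs: pin the allowed digest and DH group, attach with the keyring entries registered at the top of the test, check that the controller actually appeared, and detach again. A minimal sketch with scripts/rpc.py, using the keyid 1 parameters from the trace (--dhchap-ctrlr-key is simply dropped for slots without a controller secret, such as key4):

./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expected to print nvme0
./scripts/rpc.py bdev_nvme_detach_controller nvme0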
00:17:14.152 nvme0n1 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.152 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.411 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.411 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.411 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.411 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.411 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.411 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:14.411 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.411 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:17:14.411 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.412 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:14.412 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:14.412 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:14.412 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmMwYzgwZjk3YzJkMjcwZjNmM2I1NjhkYjQ0OGNjODS/yE/D: 00:17:14.412 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: 00:17:14.412 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:14.412 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:14.671 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmMwYzgwZjk3YzJkMjcwZjNmM2I1NjhkYjQ0OGNjODS/yE/D: 00:17:14.671 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: ]] 00:17:14.671 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: 00:17:14.671 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:17:14.671 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.671 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:14.671 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:14.671 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:14.671 03:19:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.671 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:14.671 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.671 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.671 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.671 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.671 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:14.671 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:14.671 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:14.671 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.671 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.671 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:14.671 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.671 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:14.671 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:14.671 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:14.671 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.671 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.671 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.671 nvme0n1 00:17:14.671 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.671 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.671 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.671 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.671 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.671 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.930 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.930 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.930 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.930 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.930 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.930 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.930 03:19:57 
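The for markers that keep reappearing in the trace (host/auth.sh@100 through @103) outline the overall sweep: every digest is combined with every DH group and every key slot, and each combination gets its own target reprogramming plus authenticated connect. Reconstructed as a skeleton from those markers (the digest and dhgroup lists are the ones printed when bdev_nvme_set_options was first configured):

for digest in sha256 sha384 sha512; do
    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"   # target side (configfs)
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # SPDK side (RPCs above)
        done
    done
done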
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:17:14.930 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.930 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:14.930 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:14.931 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:14.931 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:14.931 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:14.931 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:14.931 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:14.931 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:14.931 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: ]] 00:17:14.931 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:14.931 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:17:14.931 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.931 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:14.931 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:14.931 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:14.931 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.931 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:14.931 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.931 03:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.931 03:19:58 
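The get_main_ns_ip helper that shows up before every attach is pure bookkeeping: it maps the transport to the name of the environment variable holding the target address and dereferences it, which in this tcp/virt run always yields 10.0.0.1. A loose reconstruction from the trace (the transport variable name is an assumption; the real helper lives in nvmf/common.sh):

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    # pick the candidate variable for the transport in use, then dereference it
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"   # 10.0.0.1 here, i.e. $NVMF_INITIATOR_IP
}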
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.931 nvme0n1 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODgxOGIwZGJhOGE4NjBiY2E2MDcxODUwNmEyMDEyNzkPeiUc: 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODgxOGIwZGJhOGE4NjBiY2E2MDcxODUwNmEyMDEyNzkPeiUc: 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: ]] 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.931 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.191 nvme0n1 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2U4ZWZjM2I4MWRhMmE4ZGVhODIyMjNhZWY1NTA1NGUyMTZmZDJmNGVlYzgwOGRlylHk+A==: 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2U4ZWZjM2I4MWRhMmE4ZGVhODIyMjNhZWY1NTA1NGUyMTZmZDJmNGVlYzgwOGRlylHk+A==: 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: ]] 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.191 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.451 nvme0n1 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzNlOWY4NDliZDQ2YzlhNTg4YTQ0MzMxMjBhZDI4MGM5ODkzYTY4N2U2NDI0OWRjMmFiZmY1YjExMThhZTAxN0XI248=: 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzNlOWY4NDliZDQ2YzlhNTg4YTQ0MzMxMjBhZDI4MGM5ODkzYTY4N2U2NDI0OWRjMmFiZmY1YjExMThhZTAxN0XI248=: 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.451 nvme0n1 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.451 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.710 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.710 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.710 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.710 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.710 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.710 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.710 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:15.710 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.710 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:17:15.710 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.710 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:15.710 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:15.710 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:15.710 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmMwYzgwZjk3YzJkMjcwZjNmM2I1NjhkYjQ0OGNjODS/yE/D: 00:17:15.710 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: 00:17:15.710 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:15.710 03:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:16.278 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmMwYzgwZjk3YzJkMjcwZjNmM2I1NjhkYjQ0OGNjODS/yE/D: 00:17:16.278 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: ]] 00:17:16.278 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: 00:17:16.278 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:17:16.278 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:16.278 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:16.278 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:16.278 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:16.278 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:16.278 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:16.278 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.278 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.278 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.278 03:19:59 
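
The host/auth.sh@101-@104 markers just above show the sweep advancing from the sha256/ffdhe3072 pass into sha256/ffdhe4096. A minimal bash sketch of that iteration, reconstructed from the line references in the trace, follows; the array contents are assumptions for illustration, not the script's literal values.

  # Shape of the sweep at host/auth.sh@101-@104 (array contents assumed; this excerpt
  # covers ffdhe3072 through ffdhe8192 with digest sha256 and key IDs 0-4).
  dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
  keys=("DHHC-1:00:..." "DHHC-1:00:..." "DHHC-1:01:..." "DHHC-1:02:..." "DHHC-1:03:...")
  for dhgroup in "${dhgroups[@]}"; do               # @101
      for keyid in "${!keys[@]}"; do                # @102
          nvmet_auth_set_key sha256 "$dhgroup" "$keyid"     # @103: program the target for this key
          connect_authenticate sha256 "$dhgroup" "$keyid"   # @104: attach, verify, detach via RPC
      done
  done
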
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.278 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:16.278 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:16.278 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:16.278 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.278 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.278 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:16.278 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.278 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:16.279 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:16.279 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:16.279 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.279 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.279 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.279 nvme0n1 00:17:16.279 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.279 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.279 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.279 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.279 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.537 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.537 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.537 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.537 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.537 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.537 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.537 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: ]] 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.538 03:19:59 
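
Each connect_authenticate pass reduces to the same SPDK RPC sequence; the sketch below condenses the ffdhe4096/keyid=1 iteration that completes just above. rpc_cmd is the test framework's JSON-RPC helper, and key1/ckey1 are key names prepared earlier in the run (not shown in this excerpt); the commands and flags themselves are the ones visible in the trace.

  # One connect_authenticate iteration (digest=sha256, dhgroup=ffdhe4096, keyid=1).
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  ip=$(get_main_ns_ip)                      # resolves to 10.0.0.1 for the TCP transport
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Verify the controller actually came up under the expected name, then tear it down.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
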
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.538 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.797 nvme0n1 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODgxOGIwZGJhOGE4NjBiY2E2MDcxODUwNmEyMDEyNzkPeiUc: 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODgxOGIwZGJhOGE4NjBiY2E2MDcxODUwNmEyMDEyNzkPeiUc: 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: ]] 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.797 03:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.057 nvme0n1 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2U4ZWZjM2I4MWRhMmE4ZGVhODIyMjNhZWY1NTA1NGUyMTZmZDJmNGVlYzgwOGRlylHk+A==: 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2U4ZWZjM2I4MWRhMmE4ZGVhODIyMjNhZWY1NTA1NGUyMTZmZDJmNGVlYzgwOGRlylHk+A==: 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: ]] 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.057 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.316 nvme0n1 00:17:17.316 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.316 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.316 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.316 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.316 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.316 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.316 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.316 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.316 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.316 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.316 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.316 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.316 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:17:17.316 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.316 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:17.316 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:17.316 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:17.316 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzNlOWY4NDliZDQ2YzlhNTg4YTQ0MzMxMjBhZDI4MGM5ODkzYTY4N2U2NDI0OWRjMmFiZmY1YjExMThhZTAxN0XI248=: 00:17:17.316 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:17.316 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:17.316 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:17.316 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzNlOWY4NDliZDQ2YzlhNTg4YTQ0MzMxMjBhZDI4MGM5ODkzYTY4N2U2NDI0OWRjMmFiZmY1YjExMThhZTAxN0XI248=: 00:17:17.316 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:17.316 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:17:17.316 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.316 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:17.316 03:20:00 
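
Note the keyid=4 entry just above: ckey is empty, the [[ -z '' ]] branch is taken, and the subsequent attach carries only --dhchap-key key4, i.e. authentication is unidirectional for that key. The ckey=(...) expansion at host/auth.sh@58 is what makes the controller key optional; a small sketch of the idiom, with the array contents assumed:

  # host/auth.sh@58 idiom: only pass --dhchap-ctrlr-key when a controller key exists for this keyid.
  keyid=4
  ckeys[4]=""                                   # keyid 4 has no controller key in this run
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  # With ckeys[4] empty, "${ckey[@]}" expands to nothing and the attach is one-way auth only:
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey[@]}"
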
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:17.316 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:17.316 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.316 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:17.316 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.317 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.317 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.317 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.317 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:17.317 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:17.317 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:17.317 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.317 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.317 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:17.317 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.317 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:17.317 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:17.317 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:17.317 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:17.317 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.317 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.576 nvme0n1 00:17:17.576 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.576 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.576 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.576 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.576 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.576 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.576 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.576 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.576 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.576 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.576 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.576 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:17.576 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.576 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:17:17.576 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.576 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:17.576 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:17.576 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:17.576 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmMwYzgwZjk3YzJkMjcwZjNmM2I1NjhkYjQ0OGNjODS/yE/D: 00:17:17.576 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: 00:17:17.576 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:17.576 03:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmMwYzgwZjk3YzJkMjcwZjNmM2I1NjhkYjQ0OGNjODS/yE/D: 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: ]] 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.481 nvme0n1 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: ]] 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.481 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:19.482 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:19.482 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:19.482 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.482 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:19.482 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.482 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.482 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.482 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.482 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:19.482 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:19.482 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:19.482 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.482 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.482 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:19.482 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.482 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:19.482 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:19.482 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:19.482 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.482 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.482 03:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.051 nvme0n1 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.051 03:20:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODgxOGIwZGJhOGE4NjBiY2E2MDcxODUwNmEyMDEyNzkPeiUc: 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODgxOGIwZGJhOGE4NjBiY2E2MDcxODUwNmEyMDEyNzkPeiUc: 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: ]] 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.051 03:20:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.051 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.310 nvme0n1 00:17:20.310 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.310 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.310 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.310 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.310 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.310 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.310 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:M2U4ZWZjM2I4MWRhMmE4ZGVhODIyMjNhZWY1NTA1NGUyMTZmZDJmNGVlYzgwOGRlylHk+A==: 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2U4ZWZjM2I4MWRhMmE4ZGVhODIyMjNhZWY1NTA1NGUyMTZmZDJmNGVlYzgwOGRlylHk+A==: 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: ]] 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:20.311 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.311 
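
The recurring "local ip / ip_candidates" block (seen again just above) is the get_main_ns_ip helper from nvmf/common.sh choosing which environment variable carries the address for the active transport; for tcp it dereferences NVMF_INITIATOR_IP and prints 10.0.0.1. A minimal reconstruction from the @767-@781 trace lines follows; the TEST_TRANSPORT variable name and the exact failure handling are assumptions.

  # Reconstructed from the nvmf/common.sh@767-@781 trace lines: map transport -> address
  # variable, then print the resolved address (10.0.0.1 in this run).
  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      [[ -z $TEST_TRANSPORT ]] && return 1            # assumed guard; trace shows [[ -z tcp ]]
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1                     # indirect lookup, e.g. $NVMF_INITIATOR_IP
      echo "${!ip}"
  }
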
03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.570 nvme0n1 00:17:20.570 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.570 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.570 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.570 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.570 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.570 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzNlOWY4NDliZDQ2YzlhNTg4YTQ0MzMxMjBhZDI4MGM5ODkzYTY4N2U2NDI0OWRjMmFiZmY1YjExMThhZTAxN0XI248=: 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzNlOWY4NDliZDQ2YzlhNTg4YTQ0MzMxMjBhZDI4MGM5ODkzYTY4N2U2NDI0OWRjMmFiZmY1YjExMThhZTAxN0XI248=: 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.829 03:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.088 nvme0n1 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:21.088 03:20:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmMwYzgwZjk3YzJkMjcwZjNmM2I1NjhkYjQ0OGNjODS/yE/D: 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmMwYzgwZjk3YzJkMjcwZjNmM2I1NjhkYjQ0OGNjODS/yE/D: 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: ]] 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.088 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.089 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.025 nvme0n1 00:17:22.025 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.025 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:22.025 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:22.025 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.025 03:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.025 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.025 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: ]] 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.026 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.594 nvme0n1 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODgxOGIwZGJhOGE4NjBiY2E2MDcxODUwNmEyMDEyNzkPeiUc: 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODgxOGIwZGJhOGE4NjBiY2E2MDcxODUwNmEyMDEyNzkPeiUc: 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: ]] 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:22.594 
03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.594 03:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.530 nvme0n1 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2U4ZWZjM2I4MWRhMmE4ZGVhODIyMjNhZWY1NTA1NGUyMTZmZDJmNGVlYzgwOGRlylHk+A==: 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2U4ZWZjM2I4MWRhMmE4ZGVhODIyMjNhZWY1NTA1NGUyMTZmZDJmNGVlYzgwOGRlylHk+A==: 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: ]] 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.530 03:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.107 nvme0n1 00:17:24.107 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.107 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.107 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.107 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.107 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.107 03:20:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzNlOWY4NDliZDQ2YzlhNTg4YTQ0MzMxMjBhZDI4MGM5ODkzYTY4N2U2NDI0OWRjMmFiZmY1YjExMThhZTAxN0XI248=: 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzNlOWY4NDliZDQ2YzlhNTg4YTQ0MzMxMjBhZDI4MGM5ODkzYTY4N2U2NDI0OWRjMmFiZmY1YjExMThhZTAxN0XI248=: 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:24.108 03:20:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.108 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.674 nvme0n1 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmMwYzgwZjk3YzJkMjcwZjNmM2I1NjhkYjQ0OGNjODS/yE/D: 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmMwYzgwZjk3YzJkMjcwZjNmM2I1NjhkYjQ0OGNjODS/yE/D: 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: ]] 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:24.674 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:24.675 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.675 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.675 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:24.675 nvme0n1 00:17:24.675 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.675 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.675 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.933 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.933 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.933 03:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: ]] 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.933 nvme0n1 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.933 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.934 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.934 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.934 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.934 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.934 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:17:24.934 
03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.934 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:24.934 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:24.934 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:24.934 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODgxOGIwZGJhOGE4NjBiY2E2MDcxODUwNmEyMDEyNzkPeiUc: 00:17:24.934 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: 00:17:24.934 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:24.934 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:24.934 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODgxOGIwZGJhOGE4NjBiY2E2MDcxODUwNmEyMDEyNzkPeiUc: 00:17:24.934 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: ]] 00:17:24.934 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: 00:17:24.934 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:17:24.934 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.934 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:24.934 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:24.934 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:24.934 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.934 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:24.934 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.934 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.192 nvme0n1 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2U4ZWZjM2I4MWRhMmE4ZGVhODIyMjNhZWY1NTA1NGUyMTZmZDJmNGVlYzgwOGRlylHk+A==: 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2U4ZWZjM2I4MWRhMmE4ZGVhODIyMjNhZWY1NTA1NGUyMTZmZDJmNGVlYzgwOGRlylHk+A==: 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: ]] 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.192 
03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.192 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.451 nvme0n1 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzNlOWY4NDliZDQ2YzlhNTg4YTQ0MzMxMjBhZDI4MGM5ODkzYTY4N2U2NDI0OWRjMmFiZmY1YjExMThhZTAxN0XI248=: 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzNlOWY4NDliZDQ2YzlhNTg4YTQ0MzMxMjBhZDI4MGM5ODkzYTY4N2U2NDI0OWRjMmFiZmY1YjExMThhZTAxN0XI248=: 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.451 nvme0n1 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.451 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.710 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.710 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.710 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.710 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.710 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.710 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmMwYzgwZjk3YzJkMjcwZjNmM2I1NjhkYjQ0OGNjODS/yE/D: 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmMwYzgwZjk3YzJkMjcwZjNmM2I1NjhkYjQ0OGNjODS/yE/D: 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: ]] 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.711 nvme0n1 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.711 
03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: ]] 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:25.711 03:20:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:25.711 03:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:25.711 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.711 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.711 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.970 nvme0n1 00:17:25.970 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.970 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.970 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.970 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.970 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.970 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.970 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.970 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.970 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.970 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.970 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODgxOGIwZGJhOGE4NjBiY2E2MDcxODUwNmEyMDEyNzkPeiUc: 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: 00:17:25.971 03:20:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODgxOGIwZGJhOGE4NjBiY2E2MDcxODUwNmEyMDEyNzkPeiUc: 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: ]] 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.971 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.230 nvme0n1 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2U4ZWZjM2I4MWRhMmE4ZGVhODIyMjNhZWY1NTA1NGUyMTZmZDJmNGVlYzgwOGRlylHk+A==: 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2U4ZWZjM2I4MWRhMmE4ZGVhODIyMjNhZWY1NTA1NGUyMTZmZDJmNGVlYzgwOGRlylHk+A==: 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: ]] 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.230 03:20:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:26.230 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.231 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:26.231 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:26.231 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:26.231 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:26.231 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.231 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.231 nvme0n1 00:17:26.231 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.231 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.231 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.231 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.231 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.490 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.490 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.490 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.490 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.490 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.490 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.490 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.490 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:17:26.490 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.490 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:26.490 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:26.490 
03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:26.490 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzNlOWY4NDliZDQ2YzlhNTg4YTQ0MzMxMjBhZDI4MGM5ODkzYTY4N2U2NDI0OWRjMmFiZmY1YjExMThhZTAxN0XI248=: 00:17:26.490 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:26.490 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:26.490 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:26.490 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzNlOWY4NDliZDQ2YzlhNTg4YTQ0MzMxMjBhZDI4MGM5ODkzYTY4N2U2NDI0OWRjMmFiZmY1YjExMThhZTAxN0XI248=: 00:17:26.490 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:26.490 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:17:26.490 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.490 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:26.490 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:26.490 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:26.490 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.490 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:26.490 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.490 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.490 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.490 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.490 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:26.491 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:26.491 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:26.491 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.491 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.491 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:26.491 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.491 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:26.491 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:26.491 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:26.491 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:26.491 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.491 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
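[editor's note] The xtrace output above and below repeats one pattern per digest/DH-group/keyid combination: nvmet_auth_set_key programs the target side with the hash ('hmac(sha384)'), the DH group, and the DHHC-1 secret, then connect_authenticate restricts the initiator with bdev_nvme_set_options, attaches the controller with the matching --dhchap-key (and --dhchap-ctrlr-key when a bidirectional key exists), checks bdev_nvme_get_controllers, and detaches. A minimal sketch of one such initiator-side iteration is shown next; it is an illustration, not part of auth.sh, and assumes $rpc_py points at SPDK's scripts/rpc.py and that the keyring entries key1/ckey1 were registered earlier by the test.

# illustrative sketch only -- names below mirror the log, setup is assumed done
rpc_py="scripts/rpc.py"              # assumption: standard SPDK RPC client
digest=sha384 dhgroup=ffdhe3072 keyid=1

# limit the host to a single digest / DH-group combination
"$rpc_py" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# connect with DH-HMAC-CHAP; the controller key is passed only when ckey${keyid} exists
"$rpc_py" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# verify the authenticated controller came up, then tear it down
[[ $("$rpc_py" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
"$rpc_py" bdev_nvme_detach_controller nvme0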
00:17:26.491 nvme0n1 00:17:26.491 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.491 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.491 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.491 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.491 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.491 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.491 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.491 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.491 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.491 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmMwYzgwZjk3YzJkMjcwZjNmM2I1NjhkYjQ0OGNjODS/yE/D: 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmMwYzgwZjk3YzJkMjcwZjNmM2I1NjhkYjQ0OGNjODS/yE/D: 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: ]] 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:26.750 03:20:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.750 nvme0n1 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.750 03:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.750 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.750 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.750 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.750 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.010 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.010 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.010 03:20:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:17:27.010 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.010 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:27.010 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:27.010 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:27.010 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:27.010 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:27.010 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:27.010 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:27.010 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:27.010 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: ]] 00:17:27.010 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:27.010 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:17:27.010 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.010 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:27.010 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:27.010 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:27.010 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.010 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:27.010 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.010 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.010 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.010 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.011 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:27.011 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:27.011 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:27.011 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.011 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.011 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:27.011 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.011 03:20:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:27.011 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:27.011 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:27.011 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.011 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.011 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.011 nvme0n1 00:17:27.011 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.011 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.011 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.011 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.011 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.011 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.011 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.011 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.011 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.011 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODgxOGIwZGJhOGE4NjBiY2E2MDcxODUwNmEyMDEyNzkPeiUc: 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODgxOGIwZGJhOGE4NjBiY2E2MDcxODUwNmEyMDEyNzkPeiUc: 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: ]] 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.271 nvme0n1 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.271 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2U4ZWZjM2I4MWRhMmE4ZGVhODIyMjNhZWY1NTA1NGUyMTZmZDJmNGVlYzgwOGRlylHk+A==: 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2U4ZWZjM2I4MWRhMmE4ZGVhODIyMjNhZWY1NTA1NGUyMTZmZDJmNGVlYzgwOGRlylHk+A==: 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: ]] 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.530 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.530 nvme0n1 00:17:27.531 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.531 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.531 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.531 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.531 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.531 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzNlOWY4NDliZDQ2YzlhNTg4YTQ0MzMxMjBhZDI4MGM5ODkzYTY4N2U2NDI0OWRjMmFiZmY1YjExMThhZTAxN0XI248=: 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzNlOWY4NDliZDQ2YzlhNTg4YTQ0MzMxMjBhZDI4MGM5ODkzYTY4N2U2NDI0OWRjMmFiZmY1YjExMThhZTAxN0XI248=: 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.789 03:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.049 nvme0n1 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmMwYzgwZjk3YzJkMjcwZjNmM2I1NjhkYjQ0OGNjODS/yE/D: 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmMwYzgwZjk3YzJkMjcwZjNmM2I1NjhkYjQ0OGNjODS/yE/D: 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: ]] 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.049 03:20:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.049 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.346 nvme0n1 00:17:28.346 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.346 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.346 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.346 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.346 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.346 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.630 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.630 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.630 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.630 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.630 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.630 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.630 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:17:28.630 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.630 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:28.631 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:28.631 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:28.631 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:28.631 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:28.631 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:28.631 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:28.631 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:28.631 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: ]] 00:17:28.631 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:28.631 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:17:28.631 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.631 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:28.631 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:28.631 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:28.631 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.631 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:28.631 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.631 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.631 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.631 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.631 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:28.631 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:28.631 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:28.631 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.631 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.631 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:28.631 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.631 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:28.631 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:28.631 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:28.631 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.631 03:20:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.631 03:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.889 nvme0n1 00:17:28.889 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.889 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.889 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.889 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.889 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.889 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.889 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.889 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.889 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.889 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.889 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.889 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.889 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:17:28.889 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.889 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:28.889 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:28.889 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:28.889 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODgxOGIwZGJhOGE4NjBiY2E2MDcxODUwNmEyMDEyNzkPeiUc: 00:17:28.889 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: 00:17:28.889 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:28.889 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:28.890 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODgxOGIwZGJhOGE4NjBiY2E2MDcxODUwNmEyMDEyNzkPeiUc: 00:17:28.890 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: ]] 00:17:28.890 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: 00:17:28.890 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:17:28.890 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.890 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:28.890 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:28.890 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:28.890 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.890 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:28.890 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.890 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.890 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.890 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.890 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:28.890 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:28.890 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:28.890 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.890 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.890 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:28.890 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.890 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:28.890 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:28.890 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:28.890 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.890 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.890 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.456 nvme0n1 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2U4ZWZjM2I4MWRhMmE4ZGVhODIyMjNhZWY1NTA1NGUyMTZmZDJmNGVlYzgwOGRlylHk+A==: 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2U4ZWZjM2I4MWRhMmE4ZGVhODIyMjNhZWY1NTA1NGUyMTZmZDJmNGVlYzgwOGRlylHk+A==: 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: ]] 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.456 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.714 nvme0n1 00:17:29.714 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.714 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.714 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.714 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.714 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.714 03:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.972 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.972 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.972 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.972 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.972 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.972 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.972 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:17:29.972 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.972 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:29.972 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:29.972 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:29.972 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzNlOWY4NDliZDQ2YzlhNTg4YTQ0MzMxMjBhZDI4MGM5ODkzYTY4N2U2NDI0OWRjMmFiZmY1YjExMThhZTAxN0XI248=: 00:17:29.972 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:29.972 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:29.972 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:29.972 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzNlOWY4NDliZDQ2YzlhNTg4YTQ0MzMxMjBhZDI4MGM5ODkzYTY4N2U2NDI0OWRjMmFiZmY1YjExMThhZTAxN0XI248=: 00:17:29.972 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:29.972 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:17:29.972 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.972 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:29.972 03:20:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:29.972 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:29.972 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.973 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:29.973 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.973 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.973 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.973 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.973 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:29.973 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:29.973 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:29.973 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.973 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.973 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:29.973 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.973 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:29.973 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:29.973 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:29.973 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:29.973 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.973 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.231 nvme0n1 00:17:30.231 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.231 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.231 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.231 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.231 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.231 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.231 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.231 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmMwYzgwZjk3YzJkMjcwZjNmM2I1NjhkYjQ0OGNjODS/yE/D: 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmMwYzgwZjk3YzJkMjcwZjNmM2I1NjhkYjQ0OGNjODS/yE/D: 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: ]] 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.232 03:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.798 nvme0n1 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: ]] 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.798 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.799 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:30.799 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:30.799 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:30.799 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.799 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.799 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:30.799 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.799 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:30.799 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:30.799 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:30.799 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.799 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.799 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.366 nvme0n1 00:17:31.366 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.366 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.366 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.366 03:20:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.366 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.366 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.624 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.624 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.624 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.624 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.624 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.624 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.624 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:17:31.624 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.624 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:31.624 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:31.624 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:31.624 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODgxOGIwZGJhOGE4NjBiY2E2MDcxODUwNmEyMDEyNzkPeiUc: 00:17:31.624 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: 00:17:31.624 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:31.624 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:31.624 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODgxOGIwZGJhOGE4NjBiY2E2MDcxODUwNmEyMDEyNzkPeiUc: 00:17:31.624 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: ]] 00:17:31.624 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: 00:17:31.625 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:17:31.625 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.625 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:31.625 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:31.625 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:31.625 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.625 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:31.625 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.625 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.625 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.625 03:20:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.625 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:31.625 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:31.625 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:31.625 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.625 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.625 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:31.625 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.625 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:31.625 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:31.625 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:31.625 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.625 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.625 03:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.192 nvme0n1 00:17:32.192 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:M2U4ZWZjM2I4MWRhMmE4ZGVhODIyMjNhZWY1NTA1NGUyMTZmZDJmNGVlYzgwOGRlylHk+A==: 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2U4ZWZjM2I4MWRhMmE4ZGVhODIyMjNhZWY1NTA1NGUyMTZmZDJmNGVlYzgwOGRlylHk+A==: 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: ]] 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:32.193 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.193 
03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.766 nvme0n1 00:17:32.766 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.766 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.766 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.766 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:32.766 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.766 03:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.766 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.767 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:32.767 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.767 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.767 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.767 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:32.767 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:17:32.767 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.767 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:32.767 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:32.767 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:32.767 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzNlOWY4NDliZDQ2YzlhNTg4YTQ0MzMxMjBhZDI4MGM5ODkzYTY4N2U2NDI0OWRjMmFiZmY1YjExMThhZTAxN0XI248=: 00:17:32.767 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:32.767 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:32.767 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:32.767 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzNlOWY4NDliZDQ2YzlhNTg4YTQ0MzMxMjBhZDI4MGM5ODkzYTY4N2U2NDI0OWRjMmFiZmY1YjExMThhZTAxN0XI248=: 00:17:32.767 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:32.767 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:17:32.767 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:32.767 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:32.767 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:32.767 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:32.767 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:32.767 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:32.767 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.767 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.767 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.767 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:32.767 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:32.767 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:32.768 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:32.768 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.768 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.768 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:32.768 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.768 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:32.768 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:32.768 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:32.768 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:32.768 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.768 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.708 nvme0n1 00:17:33.708 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.708 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.708 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.708 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.708 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.708 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.708 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.708 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.708 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.708 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.708 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.708 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:33.708 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:33.708 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.708 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:17:33.708 03:20:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.708 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmMwYzgwZjk3YzJkMjcwZjNmM2I1NjhkYjQ0OGNjODS/yE/D: 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmMwYzgwZjk3YzJkMjcwZjNmM2I1NjhkYjQ0OGNjODS/yE/D: 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: ]] 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:33.709 03:20:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.709 nvme0n1 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: ]] 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:33.709 03:20:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.709 03:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.969 nvme0n1 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODgxOGIwZGJhOGE4NjBiY2E2MDcxODUwNmEyMDEyNzkPeiUc: 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODgxOGIwZGJhOGE4NjBiY2E2MDcxODUwNmEyMDEyNzkPeiUc: 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: ]] 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.969 nvme0n1 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.969 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2U4ZWZjM2I4MWRhMmE4ZGVhODIyMjNhZWY1NTA1NGUyMTZmZDJmNGVlYzgwOGRlylHk+A==: 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:M2U4ZWZjM2I4MWRhMmE4ZGVhODIyMjNhZWY1NTA1NGUyMTZmZDJmNGVlYzgwOGRlylHk+A==: 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: ]] 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.228 nvme0n1 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.228 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzNlOWY4NDliZDQ2YzlhNTg4YTQ0MzMxMjBhZDI4MGM5ODkzYTY4N2U2NDI0OWRjMmFiZmY1YjExMThhZTAxN0XI248=: 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzNlOWY4NDliZDQ2YzlhNTg4YTQ0MzMxMjBhZDI4MGM5ODkzYTY4N2U2NDI0OWRjMmFiZmY1YjExMThhZTAxN0XI248=: 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates=() 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.229 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.488 nvme0n1 00:17:34.488 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.488 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.488 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.488 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.488 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.488 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.488 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.488 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.488 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.488 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.488 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.488 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:34.488 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.488 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:17:34.488 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.488 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:34.488 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:34.488 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:34.488 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmMwYzgwZjk3YzJkMjcwZjNmM2I1NjhkYjQ0OGNjODS/yE/D: 00:17:34.488 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: 00:17:34.488 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:34.488 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:34.488 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmMwYzgwZjk3YzJkMjcwZjNmM2I1NjhkYjQ0OGNjODS/yE/D: 00:17:34.488 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: ]] 00:17:34.488 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: 00:17:34.488 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:17:34.488 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.489 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:34.489 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:34.489 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:34.489 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.489 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:34.489 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.489 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.489 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.489 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.489 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:34.489 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:34.489 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:34.489 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.489 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.489 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:34.489 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.489 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:34.489 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:34.489 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:34.489 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.489 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.489 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:34.748 nvme0n1 00:17:34.748 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.748 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.748 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.748 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.748 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.748 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.748 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.748 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.748 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.748 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.748 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.748 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.748 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:17:34.748 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.748 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:34.748 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:34.748 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:34.748 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:34.748 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:34.748 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:34.748 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:34.748 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:34.748 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: ]] 00:17:34.749 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:34.749 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:17:34.749 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.749 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:34.749 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:34.749 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:34.749 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:17:34.749 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:34.749 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.749 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.749 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.749 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.749 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:34.749 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:34.749 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:34.749 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.749 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.749 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:34.749 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.749 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:34.749 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:34.749 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:34.749 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.749 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.749 03:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.749 nvme0n1 00:17:34.749 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.749 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.749 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.749 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.749 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:17:35.008 
03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODgxOGIwZGJhOGE4NjBiY2E2MDcxODUwNmEyMDEyNzkPeiUc: 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODgxOGIwZGJhOGE4NjBiY2E2MDcxODUwNmEyMDEyNzkPeiUc: 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: ]] 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.008 nvme0n1 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.008 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2U4ZWZjM2I4MWRhMmE4ZGVhODIyMjNhZWY1NTA1NGUyMTZmZDJmNGVlYzgwOGRlylHk+A==: 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2U4ZWZjM2I4MWRhMmE4ZGVhODIyMjNhZWY1NTA1NGUyMTZmZDJmNGVlYzgwOGRlylHk+A==: 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: ]] 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.268 
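Note how the helper deals with the optional controller key: host/auth.sh@58, seen several times above, builds ckey as an array via ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}, so the extra flag is only passed when a bidirectional secret exists (key id 4 above has an empty ckey). A small illustration of the idiom with hypothetical values:

  ckeys=("DHHC-1:03:some-controller-secret:" "")   # entry 1 deliberately has no controller key
  for keyid in 0 1; do
      # expands to two extra words only when ckeys[keyid] is non-empty
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo rpc.py bdev_nvme_attach_controller --dhchap-key "key${keyid}" "${ckey[@]}"
  done
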
03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.268 nvme0n1 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzNlOWY4NDliZDQ2YzlhNTg4YTQ0MzMxMjBhZDI4MGM5ODkzYTY4N2U2NDI0OWRjMmFiZmY1YjExMThhZTAxN0XI248=: 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzNlOWY4NDliZDQ2YzlhNTg4YTQ0MzMxMjBhZDI4MGM5ODkzYTY4N2U2NDI0OWRjMmFiZmY1YjExMThhZTAxN0XI248=: 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.268 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.527 nvme0n1 00:17:35.527 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.527 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmMwYzgwZjk3YzJkMjcwZjNmM2I1NjhkYjQ0OGNjODS/yE/D: 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmMwYzgwZjk3YzJkMjcwZjNmM2I1NjhkYjQ0OGNjODS/yE/D: 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: ]] 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.528 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.786 nvme0n1 00:17:35.786 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.787 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.787 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.787 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.787 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.787 03:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.787 
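The nvmf/common.sh@767-781 block that repeats before every attach is the get_main_ns_ip helper: it keeps a small transport-to-variable map, picks NVMF_INITIATOR_IP because this job runs over tcp, and prints the resolved address. Roughly, with the transport variable name and the test environment assumed:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
      ip=${ip_candidates[$TEST_TRANSPORT]}    # "NVMF_INITIATOR_IP" for tcp
      [[ -z $ip || -z ${!ip} ]] && return 1   # bail out if nothing resolved
      echo "${!ip}"                           # 10.0.0.1 in this run
  }
  TEST_TRANSPORT=tcp
  NVMF_INITIATOR_IP=10.0.0.1
  get_main_ns_ip
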
03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: ]] 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:35.787 03:20:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.787 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.046 nvme0n1 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODgxOGIwZGJhOGE4NjBiY2E2MDcxODUwNmEyMDEyNzkPeiUc: 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: 00:17:36.046 03:20:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODgxOGIwZGJhOGE4NjBiY2E2MDcxODUwNmEyMDEyNzkPeiUc: 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: ]] 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.046 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.306 nvme0n1 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2U4ZWZjM2I4MWRhMmE4ZGVhODIyMjNhZWY1NTA1NGUyMTZmZDJmNGVlYzgwOGRlylHk+A==: 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2U4ZWZjM2I4MWRhMmE4ZGVhODIyMjNhZWY1NTA1NGUyMTZmZDJmNGVlYzgwOGRlylHk+A==: 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: ]] 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.306 03:20:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.306 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.564 nvme0n1 00:17:36.564 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.564 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:36.565 
03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzNlOWY4NDliZDQ2YzlhNTg4YTQ0MzMxMjBhZDI4MGM5ODkzYTY4N2U2NDI0OWRjMmFiZmY1YjExMThhZTAxN0XI248=: 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzNlOWY4NDliZDQ2YzlhNTg4YTQ0MzMxMjBhZDI4MGM5ODkzYTY4N2U2NDI0OWRjMmFiZmY1YjExMThhZTAxN0XI248=: 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.565 03:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
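[editor's note] The trace above repeats one pattern per digest/dhgroup/keyid combination; the host-side half of each iteration reduces to the sequence sketched below. This is a minimal sketch using SPDK's scripts/rpc.py directly instead of the test's rpc_cmd wrapper, and it assumes the target set up earlier in the run is still listening on 10.0.0.1:4420 and that the DHHC-1 secrets have already been registered under the key names key1/ckey1 (the rpc.py path is also an assumption; adjust to the checkout in use).

#!/usr/bin/env bash
# One host-side iteration of the nvmf_auth_host loop (sketch, not the test itself).
set -euo pipefail

rpc=./scripts/rpc.py   # assumed location of SPDK's RPC client

digest=sha512
dhgroup=ffdhe4096

# Restrict the initiator to the digest/dhgroup pair under test.
"$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Connect with keyid 1: key1 authenticates the host, ckey1 authenticates the controller.
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Verify the controller came up, then tear it down before the next iteration.
"$rpc" bdev_nvme_get_controllers | jq -r '.[].name' | grep -qx nvme0
"$rpc" bdev_nvme_detach_controller nvme0

The target-side counterpart (auth.sh@48-51 in the trace, the 'hmac(sha512)'/dhgroup/DHHC-1 echoes) writes the matching expectations into the kernel nvmet target before each attach, so the attach only succeeds when both sides agree on digest, DH group, and secret.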
00:17:36.824 nvme0n1 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmMwYzgwZjk3YzJkMjcwZjNmM2I1NjhkYjQ0OGNjODS/yE/D: 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmMwYzgwZjk3YzJkMjcwZjNmM2I1NjhkYjQ0OGNjODS/yE/D: 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: ]] 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:36.824 03:20:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.824 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.391 nvme0n1 00:17:37.391 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.391 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.391 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.391 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:37.391 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.391 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.391 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.391 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.391 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.391 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.391 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.391 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:37.391 03:20:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:17:37.391 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.391 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:37.391 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:37.391 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:37.391 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:37.391 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:37.391 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:37.391 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:37.391 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:37.391 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: ]] 00:17:37.391 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:37.391 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:17:37.391 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:37.391 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:37.391 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:37.391 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:37.391 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:37.391 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:37.391 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.391 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.391 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.391 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:37.392 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:37.392 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:37.392 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:37.392 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.392 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.392 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:37.392 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.392 03:20:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:37.392 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:37.392 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:37.392 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.392 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.392 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.651 nvme0n1 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODgxOGIwZGJhOGE4NjBiY2E2MDcxODUwNmEyMDEyNzkPeiUc: 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODgxOGIwZGJhOGE4NjBiY2E2MDcxODUwNmEyMDEyNzkPeiUc: 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: ]] 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.651 03:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.218 nvme0n1 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2U4ZWZjM2I4MWRhMmE4ZGVhODIyMjNhZWY1NTA1NGUyMTZmZDJmNGVlYzgwOGRlylHk+A==: 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2U4ZWZjM2I4MWRhMmE4ZGVhODIyMjNhZWY1NTA1NGUyMTZmZDJmNGVlYzgwOGRlylHk+A==: 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: ]] 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:38.218 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:38.219 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:38.219 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.219 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.219 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:38.219 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.219 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:38.219 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:38.219 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:38.219 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:38.219 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.219 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.488 nvme0n1 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzNlOWY4NDliZDQ2YzlhNTg4YTQ0MzMxMjBhZDI4MGM5ODkzYTY4N2U2NDI0OWRjMmFiZmY1YjExMThhZTAxN0XI248=: 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzNlOWY4NDliZDQ2YzlhNTg4YTQ0MzMxMjBhZDI4MGM5ODkzYTY4N2U2NDI0OWRjMmFiZmY1YjExMThhZTAxN0XI248=: 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.488 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.746 nvme0n1 00:17:38.746 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.746 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.746 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.747 03:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.747 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.747 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.005 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.005 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmMwYzgwZjk3YzJkMjcwZjNmM2I1NjhkYjQ0OGNjODS/yE/D: 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmMwYzgwZjk3YzJkMjcwZjNmM2I1NjhkYjQ0OGNjODS/yE/D: 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: ]] 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRiYjA3MzM0YTg4ZjQ1ZjgzM2QyZDc2ODExNDU0NTFlZDkyOWViNDM0NDgyMTI1ODkyZmUxYWYxZGFjNzllZc9G0ik=: 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.006 03:20:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.006 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.574 nvme0n1 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: ]] 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.574 03:20:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.574 03:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.142 nvme0n1 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODgxOGIwZGJhOGE4NjBiY2E2MDcxODUwNmEyMDEyNzkPeiUc: 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODgxOGIwZGJhOGE4NjBiY2E2MDcxODUwNmEyMDEyNzkPeiUc: 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: ]] 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.142 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.709 nvme0n1 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2U4ZWZjM2I4MWRhMmE4ZGVhODIyMjNhZWY1NTA1NGUyMTZmZDJmNGVlYzgwOGRlylHk+A==: 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2U4ZWZjM2I4MWRhMmE4ZGVhODIyMjNhZWY1NTA1NGUyMTZmZDJmNGVlYzgwOGRlylHk+A==: 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: ]] 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ1OWI2MDVkZjNjZWRhZmVhNmVkYjRiNTEzMjk1NjBYd0t7: 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.709 03:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.276 nvme0n1 00:17:41.276 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.276 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzNlOWY4NDliZDQ2YzlhNTg4YTQ0MzMxMjBhZDI4MGM5ODkzYTY4N2U2NDI0OWRjMmFiZmY1YjExMThhZTAxN0XI248=: 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzNlOWY4NDliZDQ2YzlhNTg4YTQ0MzMxMjBhZDI4MGM5ODkzYTY4N2U2NDI0OWRjMmFiZmY1YjExMThhZTAxN0XI248=: 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:41.277 03:20:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.277 03:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.844 nvme0n1 00:17:41.844 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.844 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:41.844 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:41.844 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.844 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.844 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.844 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.844 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:41.844 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.844 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.844 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.844 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:41.844 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:41.844 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:41.844 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:41.844 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:41.844 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:41.844 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:41.844 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:41.844 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:41.844 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:41.844 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: ]] 00:17:41.844 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:41.844 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:41.845 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.845 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.845 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.845 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:17:41.845 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:41.845 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:41.845 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:41.845 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.845 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.845 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.104 request: 00:17:42.104 { 00:17:42.104 "name": "nvme0", 00:17:42.104 "trtype": "tcp", 00:17:42.104 "traddr": "10.0.0.1", 00:17:42.104 "adrfam": "ipv4", 00:17:42.104 "trsvcid": "4420", 00:17:42.104 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:42.104 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:42.104 "prchk_reftag": false, 00:17:42.104 "prchk_guard": false, 00:17:42.104 "hdgst": false, 00:17:42.104 "ddgst": false, 00:17:42.104 "allow_unrecognized_csi": false, 00:17:42.104 "method": "bdev_nvme_attach_controller", 00:17:42.104 "req_id": 1 00:17:42.104 } 00:17:42.104 Got JSON-RPC error response 00:17:42.104 response: 00:17:42.104 { 00:17:42.104 "code": -5, 00:17:42.104 "message": "Input/output error" 00:17:42.104 } 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.104 request: 00:17:42.104 { 00:17:42.104 "name": "nvme0", 00:17:42.104 "trtype": "tcp", 00:17:42.104 "traddr": "10.0.0.1", 00:17:42.104 "adrfam": "ipv4", 00:17:42.104 "trsvcid": "4420", 00:17:42.104 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:42.104 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:42.104 "prchk_reftag": false, 00:17:42.104 "prchk_guard": false, 00:17:42.104 "hdgst": false, 00:17:42.104 "ddgst": false, 00:17:42.104 "dhchap_key": "key2", 00:17:42.104 "allow_unrecognized_csi": false, 00:17:42.104 "method": "bdev_nvme_attach_controller", 00:17:42.104 "req_id": 1 00:17:42.104 } 00:17:42.104 Got JSON-RPC error response 00:17:42.104 response: 00:17:42.104 { 00:17:42.104 "code": -5, 00:17:42.104 "message": "Input/output error" 00:17:42.104 } 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:42.104 03:20:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:42.104 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:42.105 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:42.105 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:42.105 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:42.105 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:42.105 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:42.105 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:42.105 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:42.105 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:42.105 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:42.105 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:42.105 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.105 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.105 request: 00:17:42.105 { 00:17:42.105 "name": "nvme0", 00:17:42.105 "trtype": "tcp", 00:17:42.105 "traddr": "10.0.0.1", 00:17:42.105 "adrfam": "ipv4", 00:17:42.105 "trsvcid": "4420", 
00:17:42.105 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:42.105 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:42.105 "prchk_reftag": false, 00:17:42.105 "prchk_guard": false, 00:17:42.105 "hdgst": false, 00:17:42.105 "ddgst": false, 00:17:42.105 "dhchap_key": "key1", 00:17:42.105 "dhchap_ctrlr_key": "ckey2", 00:17:42.105 "allow_unrecognized_csi": false, 00:17:42.105 "method": "bdev_nvme_attach_controller", 00:17:42.105 "req_id": 1 00:17:42.105 } 00:17:42.105 Got JSON-RPC error response 00:17:42.105 response: 00:17:42.105 { 00:17:42.105 "code": -5, 00:17:42.105 "message": "Input/output error" 00:17:42.105 } 00:17:42.105 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:42.105 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:42.105 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:42.105 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:42.105 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:42.105 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:17:42.105 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:42.105 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:42.105 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:42.105 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:42.105 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:42.105 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:42.105 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:42.105 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:42.105 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:42.105 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:42.105 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:42.105 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.105 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.364 nvme0n1 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ODgxOGIwZGJhOGE4NjBiY2E2MDcxODUwNmEyMDEyNzkPeiUc: 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODgxOGIwZGJhOGE4NjBiY2E2MDcxODUwNmEyMDEyNzkPeiUc: 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: ]] 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.364 request: 00:17:42.364 { 00:17:42.364 "name": "nvme0", 00:17:42.364 "dhchap_key": "key1", 00:17:42.364 "dhchap_ctrlr_key": "ckey2", 00:17:42.364 "method": "bdev_nvme_set_keys", 00:17:42.364 "req_id": 1 00:17:42.364 } 00:17:42.364 Got JSON-RPC error response 00:17:42.364 response: 00:17:42.364 
{ 00:17:42.364 "code": -13, 00:17:42.364 "message": "Permission denied" 00:17:42.364 } 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:17:42.364 03:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q4Njg3N2EyMmI5MTMxYTAzZWRlYmVhYmNjN2JlMDhkNjY3NTU0ZGFhZDlkOGUyVWbSEA==: 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: ]] 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzIzNjBmN2VjNDEzOGE1OGFjNDU3ODIwY2I5M2ZjMWZjMzc2OTc3ODU5NDYzYzU00YwePQ==: 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.740 nvme0n1 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODgxOGIwZGJhOGE4NjBiY2E2MDcxODUwNmEyMDEyNzkPeiUc: 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODgxOGIwZGJhOGE4NjBiY2E2MDcxODUwNmEyMDEyNzkPeiUc: 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: ]] 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQxZmM5MWI3MTA2M2I5ZmNkODk3ZGFkMTY5MGMwMDYxxcp4: 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.740 request: 00:17:43.740 { 00:17:43.740 "name": "nvme0", 00:17:43.740 "dhchap_key": "key2", 00:17:43.740 "dhchap_ctrlr_key": "ckey1", 00:17:43.740 "method": "bdev_nvme_set_keys", 00:17:43.740 "req_id": 1 00:17:43.740 } 00:17:43.740 Got JSON-RPC error response 00:17:43.740 response: 00:17:43.740 { 00:17:43.740 "code": -13, 00:17:43.740 "message": "Permission denied" 00:17:43.740 } 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:17:43.740 03:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:17:44.676 03:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:17:44.676 03:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.676 03:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.676 03:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:17:44.676 03:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.676 03:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:17:44.676 03:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:17:44.676 03:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:17:44.676 03:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:17:44.676 03:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # 
nvmfcleanup 00:17:44.676 03:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:17:44.676 03:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:44.676 03:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:17:44.676 03:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:44.676 03:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:44.676 rmmod nvme_tcp 00:17:44.935 rmmod nvme_fabrics 00:17:44.935 03:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:44.935 03:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:17:44.935 03:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:17:44.935 03:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 78355 ']' 00:17:44.935 03:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 78355 00:17:44.935 03:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 78355 ']' 00:17:44.935 03:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 78355 00:17:44.935 03:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:17:44.935 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:44.935 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78355 00:17:44.935 killing process with pid 78355 00:17:44.935 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:44.935 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:44.935 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78355' 00:17:44.935 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 78355 00:17:44.935 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 78355 00:17:45.193 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:45.193 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:45.193 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:45.193 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:17:45.193 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:17:45.193 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:45.193 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:17:45.193 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:45.193 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:45.193 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:45.193 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:45.193 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:45.193 03:20:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:45.193 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:45.193 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:45.193 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:45.193 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:45.193 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:45.193 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:45.193 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:45.193 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:45.193 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:45.193 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:45.193 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.193 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:45.193 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.451 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:17:45.451 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:45.451 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:45.451 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:17:45.451 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:17:45.451 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:17:45.451 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:45.451 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:45.451 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:45.451 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:45.451 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:17:45.451 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:17:45.451 03:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:46.018 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:46.277 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:17:46.277 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:46.277 03:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.wwI /tmp/spdk.key-null.Jtz /tmp/spdk.key-sha256.1WK /tmp/spdk.key-sha384.xHT /tmp/spdk.key-sha512.VZf /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:17:46.277 03:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:46.535 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:46.793 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:46.793 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:46.793 00:17:46.793 real 0m37.890s 00:17:46.793 user 0m34.654s 00:17:46.793 sys 0m3.999s 00:17:46.793 03:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:46.794 03:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.794 ************************************ 00:17:46.794 END TEST nvmf_auth_host 00:17:46.794 ************************************ 00:17:46.794 03:20:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:17:46.794 03:20:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:46.794 03:20:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:46.794 03:20:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:46.794 03:20:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.794 ************************************ 00:17:46.794 START TEST nvmf_digest 00:17:46.794 ************************************ 00:17:46.794 03:20:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:46.794 * Looking for test storage... 
00:17:46.794 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:46.794 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:46.794 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:46.794 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:47.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.054 --rc genhtml_branch_coverage=1 00:17:47.054 --rc genhtml_function_coverage=1 00:17:47.054 --rc genhtml_legend=1 00:17:47.054 --rc geninfo_all_blocks=1 00:17:47.054 --rc geninfo_unexecuted_blocks=1 00:17:47.054 00:17:47.054 ' 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:47.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.054 --rc genhtml_branch_coverage=1 00:17:47.054 --rc genhtml_function_coverage=1 00:17:47.054 --rc genhtml_legend=1 00:17:47.054 --rc geninfo_all_blocks=1 00:17:47.054 --rc geninfo_unexecuted_blocks=1 00:17:47.054 00:17:47.054 ' 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:47.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.054 --rc genhtml_branch_coverage=1 00:17:47.054 --rc genhtml_function_coverage=1 00:17:47.054 --rc genhtml_legend=1 00:17:47.054 --rc geninfo_all_blocks=1 00:17:47.054 --rc geninfo_unexecuted_blocks=1 00:17:47.054 00:17:47.054 ' 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:47.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.054 --rc genhtml_branch_coverage=1 00:17:47.054 --rc genhtml_function_coverage=1 00:17:47.054 --rc genhtml_legend=1 00:17:47.054 --rc geninfo_all_blocks=1 00:17:47.054 --rc geninfo_unexecuted_blocks=1 00:17:47.054 00:17:47.054 ' 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:47.054 03:20:30 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:17:47.054 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:47.055 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@458 -- # nvmf_veth_init 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:47.055 Cannot find device "nvmf_init_br" 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:47.055 Cannot find device "nvmf_init_br2" 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:47.055 Cannot find device "nvmf_tgt_br" 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:17:47.055 Cannot find device "nvmf_tgt_br2" 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:47.055 Cannot find device "nvmf_init_br" 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:47.055 Cannot find device "nvmf_init_br2" 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:47.055 Cannot find device "nvmf_tgt_br" 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:47.055 Cannot find device "nvmf_tgt_br2" 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:47.055 Cannot find device "nvmf_br" 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:47.055 Cannot find device "nvmf_init_if" 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:47.055 Cannot find device "nvmf_init_if2" 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:47.055 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:47.055 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:47.055 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:47.315 03:20:30 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:47.315 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:47.315 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:17:47.315 00:17:47.315 --- 10.0.0.3 ping statistics --- 00:17:47.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.315 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:47.315 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:47.315 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:17:47.315 00:17:47.315 --- 10.0.0.4 ping statistics --- 00:17:47.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.315 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:47.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:47.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:47.315 00:17:47.315 --- 10.0.0.1 ping statistics --- 00:17:47.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.315 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:47.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:47.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:17:47.315 00:17:47.315 --- 10.0.0.2 ping statistics --- 00:17:47.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.315 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # return 0 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:47.315 ************************************ 00:17:47.315 START TEST nvmf_digest_clean 00:17:47.315 ************************************ 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
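Editor's note: the nvmf_veth_init sequence above builds the whole test topology in software: veth pairs for the initiator and the target (the target ends are moved into the nvmf_tgt_ns_spdk namespace), everything joined by the nvmf_br bridge, iptables ACCEPT rules for port 4420, and the four pings as a reachability check. The condensed bash sketch below restates that flow for one initiator/target pair; interface names and addresses are taken from the log, and it is an illustration, not the exact nvmf/common.sh implementation.

  # illustration only: one initiator/target leg of the veth + namespace topology seen in the log
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br; ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                               # initiator reaches the namespaced target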
00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:47.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=80028 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 80028 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 80028 ']' 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:47.315 03:20:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:47.315 [2024-10-09 03:20:30.615322] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:17:47.315 [2024-10-09 03:20:30.615422] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.574 [2024-10-09 03:20:30.756799] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.574 [2024-10-09 03:20:30.870607] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:47.574 [2024-10-09 03:20:30.870912] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:47.574 [2024-10-09 03:20:30.870948] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:47.574 [2024-10-09 03:20:30.870960] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:47.574 [2024-10-09 03:20:30.870968] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:47.574 [2024-10-09 03:20:30.871466] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.511 03:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:48.511 03:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:17:48.511 03:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:48.511 03:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:48.511 03:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:48.511 03:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:48.511 03:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:17:48.511 03:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:17:48.511 03:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:17:48.511 03:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.511 03:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:48.511 [2024-10-09 03:20:31.749064] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:48.511 null0 00:17:48.511 [2024-10-09 03:20:31.800173] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:48.770 [2024-10-09 03:20:31.824284] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:48.770 03:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.770 03:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:17:48.770 03:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:48.770 03:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:48.770 03:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:48.770 03:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:48.770 03:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:48.770 03:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:48.770 03:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80060 00:17:48.770 03:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:48.770 03:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80060 /var/tmp/bperf.sock 00:17:48.770 03:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 80060 ']' 00:17:48.770 03:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:17:48.770 03:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:48.770 03:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:48.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:48.770 03:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:48.770 03:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:48.770 [2024-10-09 03:20:31.887162] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:17:48.770 [2024-10-09 03:20:31.887422] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80060 ] 00:17:48.770 [2024-10-09 03:20:32.023082] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.028 [2024-10-09 03:20:32.127561] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.595 03:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:49.595 03:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:17:49.595 03:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:49.595 03:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:49.595 03:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:49.854 [2024-10-09 03:20:33.121834] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:50.113 03:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:50.113 03:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:50.372 nvme0n1 00:17:50.372 03:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:50.372 03:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:50.372 Running I/O for 2 seconds... 
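Editor's note: every digest run in this test follows the same pattern visible above: bdevperf is started idle with -z --wait-for-rpc on /var/tmp/bperf.sock, framework_start_init is issued over that socket, a controller is attached over TCP to 10.0.0.3:4420 with data digest enabled (--ddgst), and bdevperf.py perform_tests drives the timed workload. A hedged outline of that sequence, with paths and options copied from the log rather than from host/digest.sh itself:

  SPDK=/home/vagrant/spdk_repo/spdk
  BPERF_SOCK=/var/tmp/bperf.sock
  # start bdevperf idle so the bdev layer can be configured over RPC first
  "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # (the real script waits for the socket to appear before issuing RPCs)
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" framework_start_init
  # --ddgst turns on the NVMe/TCP data digest (crc32c) that this test exercises
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # drive the configured 2-second workload, then collect the results JSON
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests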
00:17:52.362 14732.00 IOPS, 57.55 MiB/s [2024-10-09T03:20:35.665Z] 14795.50 IOPS, 57.79 MiB/s 00:17:52.362 Latency(us) 00:17:52.362 [2024-10-09T03:20:35.665Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.362 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:52.362 nvme0n1 : 2.00 14822.69 57.90 0.00 0.00 8629.78 8340.95 21805.61 00:17:52.362 [2024-10-09T03:20:35.665Z] =================================================================================================================== 00:17:52.362 [2024-10-09T03:20:35.665Z] Total : 14822.69 57.90 0.00 0.00 8629.78 8340.95 21805.61 00:17:52.362 { 00:17:52.362 "results": [ 00:17:52.362 { 00:17:52.362 "job": "nvme0n1", 00:17:52.362 "core_mask": "0x2", 00:17:52.362 "workload": "randread", 00:17:52.362 "status": "finished", 00:17:52.362 "queue_depth": 128, 00:17:52.362 "io_size": 4096, 00:17:52.362 "runtime": 2.004967, 00:17:52.362 "iops": 14822.68785471282, 00:17:52.362 "mibps": 57.90112443247195, 00:17:52.362 "io_failed": 0, 00:17:52.362 "io_timeout": 0, 00:17:52.362 "avg_latency_us": 8629.779052641561, 00:17:52.362 "min_latency_us": 8340.945454545454, 00:17:52.362 "max_latency_us": 21805.614545454544 00:17:52.362 } 00:17:52.362 ], 00:17:52.362 "core_count": 1 00:17:52.362 } 00:17:52.621 03:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:52.621 03:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:52.621 03:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:52.621 03:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:52.621 03:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:52.621 | select(.opcode=="crc32c") 00:17:52.621 | "\(.module_name) \(.executed)"' 00:17:52.880 03:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:52.880 03:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:52.880 03:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:52.880 03:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:52.880 03:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80060 00:17:52.880 03:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 80060 ']' 00:17:52.880 03:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 80060 00:17:52.880 03:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:17:52.880 03:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:52.880 03:20:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80060 00:17:52.880 03:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:52.880 03:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 
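Editor's note: the MiB/s column in these results is simply IOPS multiplied by the IO size, so the 14822.69 IOPS measured above at 4096-byte IOs corresponds to the reported 57.90 MiB/s:

  awk 'BEGIN { printf "%.2f MiB/s\n", 14822.69 * 4096 / (1024 * 1024) }'   # prints 57.90 MiB/s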
00:17:52.880 killing process with pid 80060 00:17:52.880 03:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80060' 00:17:52.880 Received shutdown signal, test time was about 2.000000 seconds 00:17:52.880 00:17:52.880 Latency(us) 00:17:52.880 [2024-10-09T03:20:36.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.880 [2024-10-09T03:20:36.183Z] =================================================================================================================== 00:17:52.880 [2024-10-09T03:20:36.183Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:52.880 03:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 80060 00:17:52.880 03:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 80060 00:17:53.138 03:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:17:53.138 03:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:53.138 03:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:53.138 03:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:53.138 03:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:53.138 03:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:53.138 03:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:53.138 03:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:53.138 03:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80122 00:17:53.139 03:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80122 /var/tmp/bperf.sock 00:17:53.139 03:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 80122 ']' 00:17:53.139 03:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:53.139 03:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:53.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:53.139 03:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:53.139 03:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:53.139 03:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:53.139 [2024-10-09 03:20:36.282560] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:17:53.139 [2024-10-09 03:20:36.282643] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80122 ] 00:17:53.139 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:53.139 Zero copy mechanism will not be used. 00:17:53.139 [2024-10-09 03:20:36.417312] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.397 [2024-10-09 03:20:36.526723] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.334 03:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:54.334 03:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:17:54.334 03:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:54.334 03:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:54.334 03:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:54.592 [2024-10-09 03:20:37.680332] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:54.592 03:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:54.592 03:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:54.851 nvme0n1 00:17:54.851 03:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:54.851 03:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:55.109 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:55.109 Zero copy mechanism will not be used. 00:17:55.109 Running I/O for 2 seconds... 
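Editor's note: after each timed run the script checks which accel module actually executed the crc32c digest work; with DSA scanning disabled (scan_dsa=false) the expected module is software. The check is the accel_get_stats RPC filtered through jq, roughly as sketched below (assuming the same /var/tmp/bperf.sock socket; this is not the literal host/digest.sh code):

  read -r acc_module acc_executed < <(
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  # with DSA disabled, the digest must have been computed by the software module
  [[ $acc_module == software ]] && (( acc_executed > 0 )) && echo "crc32c handled by $acc_module"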
00:17:56.979 7488.00 IOPS, 936.00 MiB/s [2024-10-09T03:20:40.282Z] 7496.00 IOPS, 937.00 MiB/s 00:17:56.979 Latency(us) 00:17:56.979 [2024-10-09T03:20:40.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.979 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:56.979 nvme0n1 : 2.00 7492.94 936.62 0.00 0.00 2131.98 1980.97 4468.36 00:17:56.979 [2024-10-09T03:20:40.282Z] =================================================================================================================== 00:17:56.979 [2024-10-09T03:20:40.282Z] Total : 7492.94 936.62 0.00 0.00 2131.98 1980.97 4468.36 00:17:56.979 { 00:17:56.979 "results": [ 00:17:56.979 { 00:17:56.979 "job": "nvme0n1", 00:17:56.979 "core_mask": "0x2", 00:17:56.979 "workload": "randread", 00:17:56.979 "status": "finished", 00:17:56.979 "queue_depth": 16, 00:17:56.979 "io_size": 131072, 00:17:56.979 "runtime": 2.002952, 00:17:56.979 "iops": 7492.940419940168, 00:17:56.979 "mibps": 936.617552492521, 00:17:56.979 "io_failed": 0, 00:17:56.979 "io_timeout": 0, 00:17:56.979 "avg_latency_us": 2131.975344058926, 00:17:56.979 "min_latency_us": 1980.9745454545455, 00:17:56.979 "max_latency_us": 4468.363636363636 00:17:56.979 } 00:17:56.979 ], 00:17:56.979 "core_count": 1 00:17:56.979 } 00:17:56.979 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:56.979 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:56.979 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:56.979 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:56.979 | select(.opcode=="crc32c") 00:17:56.979 | "\(.module_name) \(.executed)"' 00:17:56.979 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:57.546 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:57.546 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:57.546 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:57.546 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:57.546 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80122 00:17:57.546 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 80122 ']' 00:17:57.546 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 80122 00:17:57.546 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:17:57.546 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:57.546 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80122 00:17:57.546 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:57.546 killing process with pid 80122 00:17:57.546 Received shutdown signal, test time was about 2.000000 seconds 00:17:57.546 
00:17:57.546 Latency(us) 00:17:57.546 [2024-10-09T03:20:40.849Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.546 [2024-10-09T03:20:40.849Z] =================================================================================================================== 00:17:57.546 [2024-10-09T03:20:40.849Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:57.546 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:57.546 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80122' 00:17:57.546 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 80122 00:17:57.546 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 80122 00:17:57.804 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:17:57.804 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:57.804 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:57.804 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:57.804 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:57.805 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:57.805 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:57.805 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:57.805 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80183 00:17:57.805 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80183 /var/tmp/bperf.sock 00:17:57.805 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 80183 ']' 00:17:57.805 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:57.805 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:57.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:57.805 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:57.805 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:57.805 03:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:57.805 [2024-10-09 03:20:40.909810] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:17:57.805 [2024-10-09 03:20:40.909910] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80183 ] 00:17:57.805 [2024-10-09 03:20:41.043660] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.064 [2024-10-09 03:20:41.157622] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.064 03:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:58.064 03:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:17:58.064 03:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:58.064 03:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:58.064 03:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:58.323 [2024-10-09 03:20:41.535547] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:58.323 03:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:58.323 03:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:58.890 nvme0n1 00:17:58.890 03:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:58.890 03:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:58.890 Running I/O for 2 seconds... 
00:18:00.762 16892.00 IOPS, 65.98 MiB/s [2024-10-09T03:20:44.065Z] 17590.00 IOPS, 68.71 MiB/s 00:18:00.762 Latency(us) 00:18:00.762 [2024-10-09T03:20:44.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.762 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:00.762 nvme0n1 : 2.01 17587.97 68.70 0.00 0.00 7271.99 6494.02 15013.70 00:18:00.762 [2024-10-09T03:20:44.065Z] =================================================================================================================== 00:18:00.762 [2024-10-09T03:20:44.065Z] Total : 17587.97 68.70 0.00 0.00 7271.99 6494.02 15013.70 00:18:00.762 { 00:18:00.762 "results": [ 00:18:00.762 { 00:18:00.762 "job": "nvme0n1", 00:18:00.762 "core_mask": "0x2", 00:18:00.762 "workload": "randwrite", 00:18:00.762 "status": "finished", 00:18:00.762 "queue_depth": 128, 00:18:00.762 "io_size": 4096, 00:18:00.762 "runtime": 2.007508, 00:18:00.762 "iops": 17587.97474281547, 00:18:00.762 "mibps": 68.70302633912293, 00:18:00.762 "io_failed": 0, 00:18:00.762 "io_timeout": 0, 00:18:00.762 "avg_latency_us": 7271.988883693627, 00:18:00.762 "min_latency_us": 6494.021818181818, 00:18:00.762 "max_latency_us": 15013.701818181818 00:18:00.762 } 00:18:00.762 ], 00:18:00.762 "core_count": 1 00:18:00.762 } 00:18:00.763 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:00.763 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:00.763 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:00.763 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:00.763 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:00.763 | select(.opcode=="crc32c") 00:18:00.763 | "\(.module_name) \(.executed)"' 00:18:01.329 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:01.330 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:01.330 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:01.330 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:01.330 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80183 00:18:01.330 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 80183 ']' 00:18:01.330 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 80183 00:18:01.330 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:18:01.330 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:01.330 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80183 00:18:01.330 killing process with pid 80183 00:18:01.330 Received shutdown signal, test time was about 2.000000 seconds 00:18:01.330 00:18:01.330 Latency(us) 00:18:01.330 [2024-10-09T03:20:44.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:18:01.330 [2024-10-09T03:20:44.633Z] =================================================================================================================== 00:18:01.330 [2024-10-09T03:20:44.633Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:01.330 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:01.330 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:01.330 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80183' 00:18:01.330 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 80183 00:18:01.330 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 80183 00:18:01.330 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:18:01.330 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:01.330 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:01.330 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:01.330 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:01.330 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:01.330 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:01.330 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80236 00:18:01.330 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80236 /var/tmp/bperf.sock 00:18:01.330 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:01.330 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 80236 ']' 00:18:01.330 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:01.330 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:01.330 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:01.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:01.330 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:01.330 03:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:01.588 [2024-10-09 03:20:44.664662] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:18:01.588 [2024-10-09 03:20:44.664965] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80236 ] 00:18:01.588 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:01.588 Zero copy mechanism will not be used. 00:18:01.588 [2024-10-09 03:20:44.803352] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.846 [2024-10-09 03:20:44.902264] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.412 03:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:02.412 03:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:18:02.412 03:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:02.412 03:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:02.412 03:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:02.669 [2024-10-09 03:20:45.943597] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:02.926 03:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:02.926 03:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:03.184 nvme0n1 00:18:03.184 03:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:03.184 03:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:03.184 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:03.184 Zero copy mechanism will not be used. 00:18:03.184 Running I/O for 2 seconds... 
00:18:05.516 6906.00 IOPS, 863.25 MiB/s [2024-10-09T03:20:48.819Z] 6944.00 IOPS, 868.00 MiB/s 00:18:05.516 Latency(us) 00:18:05.516 [2024-10-09T03:20:48.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.516 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:05.516 nvme0n1 : 2.00 6938.55 867.32 0.00 0.00 2300.66 1779.90 9055.88 00:18:05.516 [2024-10-09T03:20:48.819Z] =================================================================================================================== 00:18:05.516 [2024-10-09T03:20:48.819Z] Total : 6938.55 867.32 0.00 0.00 2300.66 1779.90 9055.88 00:18:05.516 { 00:18:05.516 "results": [ 00:18:05.516 { 00:18:05.516 "job": "nvme0n1", 00:18:05.516 "core_mask": "0x2", 00:18:05.516 "workload": "randwrite", 00:18:05.516 "status": "finished", 00:18:05.516 "queue_depth": 16, 00:18:05.516 "io_size": 131072, 00:18:05.516 "runtime": 2.003877, 00:18:05.516 "iops": 6938.549621558609, 00:18:05.516 "mibps": 867.3187026948261, 00:18:05.516 "io_failed": 0, 00:18:05.516 "io_timeout": 0, 00:18:05.516 "avg_latency_us": 2300.657457893085, 00:18:05.516 "min_latency_us": 1779.898181818182, 00:18:05.516 "max_latency_us": 9055.883636363636 00:18:05.516 } 00:18:05.516 ], 00:18:05.516 "core_count": 1 00:18:05.516 } 00:18:05.516 03:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:05.516 03:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:05.516 03:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:05.516 03:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:05.516 | select(.opcode=="crc32c") 00:18:05.516 | "\(.module_name) \(.executed)"' 00:18:05.516 03:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:05.517 03:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:05.517 03:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:05.517 03:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:05.517 03:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:05.517 03:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80236 00:18:05.517 03:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 80236 ']' 00:18:05.517 03:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 80236 00:18:05.517 03:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:18:05.517 03:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:05.517 03:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80236 00:18:05.517 killing process with pid 80236 00:18:05.517 Received shutdown signal, test time was about 2.000000 seconds 00:18:05.517 00:18:05.517 Latency(us) 00:18:05.517 [2024-10-09T03:20:48.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:18:05.517 [2024-10-09T03:20:48.820Z] =================================================================================================================== 00:18:05.517 [2024-10-09T03:20:48.820Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:05.517 03:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:05.517 03:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:05.517 03:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80236' 00:18:05.517 03:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 80236 00:18:05.517 03:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 80236 00:18:05.775 03:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80028 00:18:05.775 03:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 80028 ']' 00:18:05.775 03:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 80028 00:18:05.775 03:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:18:05.775 03:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:05.775 03:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80028 00:18:05.775 killing process with pid 80028 00:18:05.775 03:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:05.775 03:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:05.775 03:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80028' 00:18:05.775 03:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 80028 00:18:05.775 03:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 80028 00:18:06.034 00:18:06.034 real 0m18.677s 00:18:06.034 user 0m36.604s 00:18:06.034 sys 0m4.703s 00:18:06.034 03:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:06.034 ************************************ 00:18:06.034 END TEST nvmf_digest_clean 00:18:06.034 ************************************ 00:18:06.034 03:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:06.034 03:20:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:18:06.034 03:20:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:06.034 03:20:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:06.034 03:20:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:06.034 ************************************ 00:18:06.034 START TEST nvmf_digest_error 00:18:06.034 ************************************ 00:18:06.034 03:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:18:06.034 03:20:49 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:18:06.034 03:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:06.034 03:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:06.034 03:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:06.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.034 03:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=80319 00:18:06.034 03:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 80319 00:18:06.034 03:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 80319 ']' 00:18:06.034 03:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:06.034 03:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.034 03:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:06.034 03:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.034 03:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:06.034 03:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:06.293 [2024-10-09 03:20:49.354399] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:18:06.293 [2024-10-09 03:20:49.354536] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:06.293 [2024-10-09 03:20:49.494956] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.293 [2024-10-09 03:20:49.587784] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:06.293 [2024-10-09 03:20:49.587838] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:06.293 [2024-10-09 03:20:49.587864] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:06.293 [2024-10-09 03:20:49.587872] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:06.293 [2024-10-09 03:20:49.587878] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
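The nvmf_digest_error test starts by launching the NVMe-oF target inside its own network namespace with --wait-for-rpc and then waiting for the target's RPC socket, exactly as the nvmfappstart command line above shows. A minimal sketch of that launch-and-wait step; the polling loop is only a stand-in for the harness's waitforlisten helper, and the default /var/tmp/spdk.sock socket path is an assumption of the sketch.

# start the target idle (subsystems not initialized yet) inside the test netns
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!

# wait until the daemon answers on its RPC socket (stand-in for waitforlisten)
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done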
00:18:06.293 [2024-10-09 03:20:49.588223] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.228 03:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:07.228 03:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:18:07.228 03:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:07.228 03:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:07.228 03:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:07.228 03:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:07.228 03:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:18:07.228 03:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.228 03:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:07.228 [2024-10-09 03:20:50.408660] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:18:07.228 03:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.228 03:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:18:07.228 03:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:18:07.228 03:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.228 03:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:07.228 [2024-10-09 03:20:50.469889] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:07.228 null0 00:18:07.228 [2024-10-09 03:20:50.522216] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:07.487 [2024-10-09 03:20:50.546425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:07.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
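Before the framework is initialized, the test assigns the crc32c operation to the accel "error" module (the accel_assign_opc call above); common_target_config then creates the null0 bdev and exposes it over NVMe/TCP on 10.0.0.3:4420, as the transport and listener notices show. In the sketch below only the accel_assign_opc call is verbatim from the trace; the remaining lines name standard SPDK target-setup RPCs inferred from those notices, and their arguments (bdev size, block size) are illustrative assumptions rather than values taken from this excerpt.

# route all crc32c work (used for NVMe/TCP data digests) through the error-injection accel module,
# then let the framework finish initializing
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock accel_assign_opc -o crc32c -m error
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init

# export a null bdev over NVMe/TCP (arguments illustrative; the log only shows the resulting notices)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_null_create null0 1000 512
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4420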
00:18:07.487 03:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.487 03:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:18:07.487 03:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:07.487 03:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:07.487 03:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:07.487 03:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:07.487 03:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80357 00:18:07.487 03:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80357 /var/tmp/bperf.sock 00:18:07.487 03:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 80357 ']' 00:18:07.487 03:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:07.487 03:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:07.487 03:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:18:07.487 03:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:07.487 03:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:07.487 03:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:07.487 [2024-10-09 03:20:50.612085] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:18:07.487 [2024-10-09 03:20:50.612456] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80357 ] 00:18:07.487 [2024-10-09 03:20:50.754275] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.746 [2024-10-09 03:20:50.895477] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.746 [2024-10-09 03:20:50.963605] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:08.682 03:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:08.682 03:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:18:08.682 03:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:08.682 03:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:08.682 03:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:08.682 03:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.682 03:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:08.682 03:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.682 03:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:08.682 03:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:09.250 nvme0n1 00:18:09.251 03:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:09.251 03:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.251 03:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:09.251 03:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.251 03:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:09.251 03:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:09.251 Running I/O for 2 seconds... 
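The error path itself works by steering the target's crc32c (data-digest) computation through the accel error module and then telling that module to corrupt its results: injection is first disabled while the host controller attaches with --ddgst, and only re-enabled in corrupt mode before perform_tests, so the affected reads complete on the host with a data digest error and a TRANSIENT TRANSPORT ERROR status, which is exactly the stream of nvme_tcp/nvme_qpair messages that follows. The sketch below condenses the rpc_cmd and bperf_rpc lines above; showing rpc_cmd against the target's default /var/tmp/spdk.sock is an assumption of the sketch, while the -i 256 interval is the value this run used.

# target side: make sure corruption is off while the host connects
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t disable

# host (bperf) side: enable NVMe error stats, retry forever, attach with data digest enabled
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# target side: start corrupting crc32c results, then drive the randread workload from bperf
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t corrupt -i 256
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests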
00:18:09.251 [2024-10-09 03:20:52.451019] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.251 [2024-10-09 03:20:52.451303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.251 [2024-10-09 03:20:52.451327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.251 [2024-10-09 03:20:52.470343] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.251 [2024-10-09 03:20:52.470514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.251 [2024-10-09 03:20:52.470532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.251 [2024-10-09 03:20:52.488718] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.251 [2024-10-09 03:20:52.488757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.251 [2024-10-09 03:20:52.488771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.251 [2024-10-09 03:20:52.507126] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.251 [2024-10-09 03:20:52.507338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.251 [2024-10-09 03:20:52.507356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.251 [2024-10-09 03:20:52.525392] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.251 [2024-10-09 03:20:52.525429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.251 [2024-10-09 03:20:52.525443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.251 [2024-10-09 03:20:52.540755] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.251 [2024-10-09 03:20:52.540790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.251 [2024-10-09 03:20:52.540819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.510 [2024-10-09 03:20:52.555904] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.510 [2024-10-09 03:20:52.555939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.510 [2024-10-09 03:20:52.555967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.510 [2024-10-09 03:20:52.571447] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.510 [2024-10-09 03:20:52.571482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.510 [2024-10-09 03:20:52.571525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.510 [2024-10-09 03:20:52.586668] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.510 [2024-10-09 03:20:52.586860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.510 [2024-10-09 03:20:52.586893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.510 [2024-10-09 03:20:52.602167] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.510 [2024-10-09 03:20:52.602203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.510 [2024-10-09 03:20:52.602216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.510 [2024-10-09 03:20:52.618189] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.510 [2024-10-09 03:20:52.618226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.510 [2024-10-09 03:20:52.618239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.511 [2024-10-09 03:20:52.633547] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.511 [2024-10-09 03:20:52.633588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.511 [2024-10-09 03:20:52.633617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.511 [2024-10-09 03:20:52.649483] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.511 [2024-10-09 03:20:52.649518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.511 [2024-10-09 03:20:52.649547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.511 [2024-10-09 03:20:52.665623] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.511 [2024-10-09 03:20:52.665659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.511 [2024-10-09 03:20:52.665687] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.511 [2024-10-09 03:20:52.681677] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.511 [2024-10-09 03:20:52.681714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.511 [2024-10-09 03:20:52.681743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.511 [2024-10-09 03:20:52.698732] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.511 [2024-10-09 03:20:52.698769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.511 [2024-10-09 03:20:52.698798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.511 [2024-10-09 03:20:52.716879] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.511 [2024-10-09 03:20:52.717165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.511 [2024-10-09 03:20:52.717183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.511 [2024-10-09 03:20:52.734259] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.511 [2024-10-09 03:20:52.734297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.511 [2024-10-09 03:20:52.734311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.511 [2024-10-09 03:20:52.750640] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.511 [2024-10-09 03:20:52.750676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.511 [2024-10-09 03:20:52.750705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.511 [2024-10-09 03:20:52.766922] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.511 [2024-10-09 03:20:52.766957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.511 [2024-10-09 03:20:52.766986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.511 [2024-10-09 03:20:52.783086] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.511 [2024-10-09 03:20:52.783308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.511 [2024-10-09 03:20:52.783326] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.511 [2024-10-09 03:20:52.799485] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.511 [2024-10-09 03:20:52.799522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.511 [2024-10-09 03:20:52.799550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.770 [2024-10-09 03:20:52.815806] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.770 [2024-10-09 03:20:52.815842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.770 [2024-10-09 03:20:52.815871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.770 [2024-10-09 03:20:52.832960] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.770 [2024-10-09 03:20:52.833012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.770 [2024-10-09 03:20:52.833042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.770 [2024-10-09 03:20:52.849369] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.770 [2024-10-09 03:20:52.849426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.770 [2024-10-09 03:20:52.849439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.770 [2024-10-09 03:20:52.865280] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.770 [2024-10-09 03:20:52.865315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.770 [2024-10-09 03:20:52.865344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.770 [2024-10-09 03:20:52.881482] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.770 [2024-10-09 03:20:52.881516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.770 [2024-10-09 03:20:52.881545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.770 [2024-10-09 03:20:52.897652] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.770 [2024-10-09 03:20:52.897687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16537 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:09.770 [2024-10-09 03:20:52.897715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.770 [2024-10-09 03:20:52.914042] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.770 [2024-10-09 03:20:52.914269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.770 [2024-10-09 03:20:52.914287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.770 [2024-10-09 03:20:52.930795] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.770 [2024-10-09 03:20:52.930832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.770 [2024-10-09 03:20:52.930861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.770 [2024-10-09 03:20:52.946950] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.770 [2024-10-09 03:20:52.946987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.770 [2024-10-09 03:20:52.947016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.770 [2024-10-09 03:20:52.963346] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.770 [2024-10-09 03:20:52.963414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.770 [2024-10-09 03:20:52.963443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.770 [2024-10-09 03:20:52.979595] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.770 [2024-10-09 03:20:52.979630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.770 [2024-10-09 03:20:52.979659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.770 [2024-10-09 03:20:52.995622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.770 [2024-10-09 03:20:52.995657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.770 [2024-10-09 03:20:52.995686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.770 [2024-10-09 03:20:53.011666] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.770 [2024-10-09 03:20:53.011701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 
nsid:1 lba:21816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.770 [2024-10-09 03:20:53.011729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.770 [2024-10-09 03:20:53.027980] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.770 [2024-10-09 03:20:53.028034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.770 [2024-10-09 03:20:53.028078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.770 [2024-10-09 03:20:53.044136] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.770 [2024-10-09 03:20:53.044172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.770 [2024-10-09 03:20:53.044200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.770 [2024-10-09 03:20:53.060423] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:09.770 [2024-10-09 03:20:53.060457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.770 [2024-10-09 03:20:53.060485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.029 [2024-10-09 03:20:53.076471] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.029 [2024-10-09 03:20:53.076506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.029 [2024-10-09 03:20:53.076535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.029 [2024-10-09 03:20:53.092444] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.029 [2024-10-09 03:20:53.092478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.029 [2024-10-09 03:20:53.092506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.029 [2024-10-09 03:20:53.108606] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.029 [2024-10-09 03:20:53.108639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.029 [2024-10-09 03:20:53.108667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.029 [2024-10-09 03:20:53.124040] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.029 [2024-10-09 03:20:53.124082] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.029 [2024-10-09 03:20:53.124110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.029 [2024-10-09 03:20:53.138858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.029 [2024-10-09 03:20:53.139085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.029 [2024-10-09 03:20:53.139103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.029 [2024-10-09 03:20:53.153696] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.029 [2024-10-09 03:20:53.153885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.029 [2024-10-09 03:20:53.153917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.029 [2024-10-09 03:20:53.168759] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.029 [2024-10-09 03:20:53.168947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.029 [2024-10-09 03:20:53.168980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.029 [2024-10-09 03:20:53.183739] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.029 [2024-10-09 03:20:53.183774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.029 [2024-10-09 03:20:53.183803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.029 [2024-10-09 03:20:53.198530] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.030 [2024-10-09 03:20:53.198719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.030 [2024-10-09 03:20:53.198751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.030 [2024-10-09 03:20:53.213543] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.030 [2024-10-09 03:20:53.213595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.030 [2024-10-09 03:20:53.213624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.030 [2024-10-09 03:20:53.228472] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 
00:18:10.030 [2024-10-09 03:20:53.228697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.030 [2024-10-09 03:20:53.228729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.030 [2024-10-09 03:20:53.244962] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.030 [2024-10-09 03:20:53.245029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.030 [2024-10-09 03:20:53.245075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.030 [2024-10-09 03:20:53.261436] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.030 [2024-10-09 03:20:53.261471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.030 [2024-10-09 03:20:53.261500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.030 [2024-10-09 03:20:53.276997] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.030 [2024-10-09 03:20:53.277034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.030 [2024-10-09 03:20:53.277078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.030 [2024-10-09 03:20:53.292327] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.030 [2024-10-09 03:20:53.292360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.030 [2024-10-09 03:20:53.292388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.030 [2024-10-09 03:20:53.307216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.030 [2024-10-09 03:20:53.307246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.030 [2024-10-09 03:20:53.307258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.030 [2024-10-09 03:20:53.322284] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.030 [2024-10-09 03:20:53.322511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.030 [2024-10-09 03:20:53.322528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.289 [2024-10-09 03:20:53.337817] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.289 [2024-10-09 03:20:53.338019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.289 [2024-10-09 03:20:53.338053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.289 [2024-10-09 03:20:53.353248] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.289 [2024-10-09 03:20:53.353463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.289 [2024-10-09 03:20:53.353636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.289 [2024-10-09 03:20:53.369023] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.289 [2024-10-09 03:20:53.369265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.289 [2024-10-09 03:20:53.369385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.289 [2024-10-09 03:20:53.384570] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.289 [2024-10-09 03:20:53.384805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.289 [2024-10-09 03:20:53.384954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.289 [2024-10-09 03:20:53.400241] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.289 [2024-10-09 03:20:53.400437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.289 [2024-10-09 03:20:53.400557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.289 [2024-10-09 03:20:53.416100] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.289 [2024-10-09 03:20:53.416307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.289 [2024-10-09 03:20:53.416496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.289 15560.00 IOPS, 60.78 MiB/s [2024-10-09T03:20:53.592Z] [2024-10-09 03:20:53.431774] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.289 [2024-10-09 03:20:53.431983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.289 [2024-10-09 03:20:53.432132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.289 [2024-10-09 03:20:53.448302] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.289 [2024-10-09 03:20:53.448513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.289 [2024-10-09 03:20:53.448678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.289 [2024-10-09 03:20:53.473562] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.289 [2024-10-09 03:20:53.473773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.289 [2024-10-09 03:20:53.473900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.289 [2024-10-09 03:20:53.489990] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.289 [2024-10-09 03:20:53.490228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.289 [2024-10-09 03:20:53.490247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.289 [2024-10-09 03:20:53.506321] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.289 [2024-10-09 03:20:53.506570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.289 [2024-10-09 03:20:53.506714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.289 [2024-10-09 03:20:53.522627] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.290 [2024-10-09 03:20:53.522835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.290 [2024-10-09 03:20:53.522981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.290 [2024-10-09 03:20:53.539255] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.290 [2024-10-09 03:20:53.539453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.290 [2024-10-09 03:20:53.539628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.290 [2024-10-09 03:20:53.555513] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.290 [2024-10-09 03:20:53.555724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.290 [2024-10-09 03:20:53.555867] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.290 [2024-10-09 03:20:53.571789] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.290 [2024-10-09 03:20:53.572037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.290 [2024-10-09 03:20:53.572230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.290 [2024-10-09 03:20:53.587861] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.290 [2024-10-09 03:20:53.588081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.290 [2024-10-09 03:20:53.588202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.549 [2024-10-09 03:20:53.603858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.549 [2024-10-09 03:20:53.604107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.549 [2024-10-09 03:20:53.604241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.549 [2024-10-09 03:20:53.619700] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.549 [2024-10-09 03:20:53.619908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.549 [2024-10-09 03:20:53.620064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.549 [2024-10-09 03:20:53.635580] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.549 [2024-10-09 03:20:53.635822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.549 [2024-10-09 03:20:53.635952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.549 [2024-10-09 03:20:53.651005] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.549 [2024-10-09 03:20:53.651249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.549 [2024-10-09 03:20:53.651422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.549 [2024-10-09 03:20:53.666964] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.549 [2024-10-09 03:20:53.667223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16836 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:10.549 [2024-10-09 03:20:53.667326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.549 [2024-10-09 03:20:53.682309] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.549 [2024-10-09 03:20:53.682541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.549 [2024-10-09 03:20:53.682681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.549 [2024-10-09 03:20:53.697265] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.549 [2024-10-09 03:20:53.697473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.549 [2024-10-09 03:20:53.697616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.549 [2024-10-09 03:20:53.712159] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.549 [2024-10-09 03:20:53.712366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.549 [2024-10-09 03:20:53.712509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.549 [2024-10-09 03:20:53.727838] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.549 [2024-10-09 03:20:53.728052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.549 [2024-10-09 03:20:53.728221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.549 [2024-10-09 03:20:53.744851] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.549 [2024-10-09 03:20:53.745124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.549 [2024-10-09 03:20:53.745243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.549 [2024-10-09 03:20:53.761069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.550 [2024-10-09 03:20:53.761308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.550 [2024-10-09 03:20:53.761418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.550 [2024-10-09 03:20:53.777076] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.550 [2024-10-09 03:20:53.777113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:88 nsid:1 lba:7318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.550 [2024-10-09 03:20:53.777142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.550 [2024-10-09 03:20:53.792111] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.550 [2024-10-09 03:20:53.792152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.550 [2024-10-09 03:20:53.792180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.550 [2024-10-09 03:20:53.806888] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.550 [2024-10-09 03:20:53.807120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.550 [2024-10-09 03:20:53.807138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.550 [2024-10-09 03:20:53.822149] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.550 [2024-10-09 03:20:53.822345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.550 [2024-10-09 03:20:53.822379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.550 [2024-10-09 03:20:53.837085] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.550 [2024-10-09 03:20:53.837120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.550 [2024-10-09 03:20:53.837149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.809 [2024-10-09 03:20:53.852218] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.809 [2024-10-09 03:20:53.852394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.809 [2024-10-09 03:20:53.852428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.809 [2024-10-09 03:20:53.868149] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.809 [2024-10-09 03:20:53.868186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.809 [2024-10-09 03:20:53.868221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.809 [2024-10-09 03:20:53.883499] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.809 [2024-10-09 03:20:53.883710] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.809 [2024-10-09 03:20:53.883742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.809 [2024-10-09 03:20:53.899273] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.809 [2024-10-09 03:20:53.899310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.809 [2024-10-09 03:20:53.899339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.809 [2024-10-09 03:20:53.914248] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.809 [2024-10-09 03:20:53.914462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.809 [2024-10-09 03:20:53.914510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.809 [2024-10-09 03:20:53.929842] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.809 [2024-10-09 03:20:53.930033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.809 [2024-10-09 03:20:53.930117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.809 [2024-10-09 03:20:53.945858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.809 [2024-10-09 03:20:53.945896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.809 [2024-10-09 03:20:53.945924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.809 [2024-10-09 03:20:53.961529] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.809 [2024-10-09 03:20:53.961567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.809 [2024-10-09 03:20:53.961596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.809 [2024-10-09 03:20:53.976893] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.809 [2024-10-09 03:20:53.977107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.809 [2024-10-09 03:20:53.977140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.809 [2024-10-09 03:20:53.992644] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 
00:18:10.809 [2024-10-09 03:20:53.992681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.809 [2024-10-09 03:20:53.992717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.809 [2024-10-09 03:20:54.008645] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.809 [2024-10-09 03:20:54.008681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.809 [2024-10-09 03:20:54.008710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.809 [2024-10-09 03:20:54.023752] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.809 [2024-10-09 03:20:54.023788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.809 [2024-10-09 03:20:54.023816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.809 [2024-10-09 03:20:54.038726] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.810 [2024-10-09 03:20:54.038917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.810 [2024-10-09 03:20:54.038950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.810 [2024-10-09 03:20:54.054029] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.810 [2024-10-09 03:20:54.054287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.810 [2024-10-09 03:20:54.054305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.810 [2024-10-09 03:20:54.069225] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.810 [2024-10-09 03:20:54.069432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.810 [2024-10-09 03:20:54.069578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.810 [2024-10-09 03:20:54.085323] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.810 [2024-10-09 03:20:54.085572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.810 [2024-10-09 03:20:54.085710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.810 [2024-10-09 03:20:54.101829] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:10.810 [2024-10-09 03:20:54.102100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.810 [2024-10-09 03:20:54.102302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.069 [2024-10-09 03:20:54.118558] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:11.069 [2024-10-09 03:20:54.118783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.069 [2024-10-09 03:20:54.118979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.069 [2024-10-09 03:20:54.135252] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:11.069 [2024-10-09 03:20:54.135473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.069 [2024-10-09 03:20:54.135722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.069 [2024-10-09 03:20:54.151379] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:11.069 [2024-10-09 03:20:54.151579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.069 [2024-10-09 03:20:54.151701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.069 [2024-10-09 03:20:54.167512] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:11.069 [2024-10-09 03:20:54.167724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.069 [2024-10-09 03:20:54.167948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.069 [2024-10-09 03:20:54.183485] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:11.069 [2024-10-09 03:20:54.183693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.069 [2024-10-09 03:20:54.183838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.069 [2024-10-09 03:20:54.199841] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:11.069 [2024-10-09 03:20:54.200081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.069 [2024-10-09 03:20:54.200196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:18:11.069 [2024-10-09 03:20:54.215164] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:11.069 [2024-10-09 03:20:54.215201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.069 [2024-10-09 03:20:54.215230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.069 [2024-10-09 03:20:54.231341] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:11.069 [2024-10-09 03:20:54.231379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.070 [2024-10-09 03:20:54.231408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.070 [2024-10-09 03:20:54.247866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:11.070 [2024-10-09 03:20:54.247905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.070 [2024-10-09 03:20:54.247950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.070 [2024-10-09 03:20:54.264281] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:11.070 [2024-10-09 03:20:54.264317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.070 [2024-10-09 03:20:54.264362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.070 [2024-10-09 03:20:54.280435] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:11.070 [2024-10-09 03:20:54.280472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.070 [2024-10-09 03:20:54.280501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.070 [2024-10-09 03:20:54.297513] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:11.070 [2024-10-09 03:20:54.297564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.070 [2024-10-09 03:20:54.297593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.070 [2024-10-09 03:20:54.313580] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:11.070 [2024-10-09 03:20:54.313617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.070 [2024-10-09 03:20:54.313645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.070 [2024-10-09 03:20:54.329537] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:11.070 [2024-10-09 03:20:54.329574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.070 [2024-10-09 03:20:54.329602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.070 [2024-10-09 03:20:54.345576] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:11.070 [2024-10-09 03:20:54.345614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.070 [2024-10-09 03:20:54.345643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.070 [2024-10-09 03:20:54.361682] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:11.070 [2024-10-09 03:20:54.361718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.070 [2024-10-09 03:20:54.361747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.329 [2024-10-09 03:20:54.377850] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:11.329 [2024-10-09 03:20:54.377887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.329 [2024-10-09 03:20:54.377916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.329 [2024-10-09 03:20:54.393943] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:11.329 [2024-10-09 03:20:54.393980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.329 [2024-10-09 03:20:54.394008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.329 [2024-10-09 03:20:54.409254] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:11.329 [2024-10-09 03:20:54.409289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.329 [2024-10-09 03:20:54.409318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.329 15750.00 IOPS, 61.52 MiB/s [2024-10-09T03:20:54.632Z] [2024-10-09 03:20:54.425214] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2105a80) 00:18:11.329 [2024-10-09 03:20:54.425252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.329 [2024-10-09 
03:20:54.425282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.329 00:18:11.329 Latency(us) 00:18:11.329 [2024-10-09T03:20:54.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.329 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:11.329 nvme0n1 : 2.01 15761.22 61.57 0.00 0.00 8114.48 7000.44 32410.53 00:18:11.329 [2024-10-09T03:20:54.632Z] =================================================================================================================== 00:18:11.329 [2024-10-09T03:20:54.632Z] Total : 15761.22 61.57 0.00 0.00 8114.48 7000.44 32410.53 00:18:11.329 { 00:18:11.329 "results": [ 00:18:11.329 { 00:18:11.329 "job": "nvme0n1", 00:18:11.329 "core_mask": "0x2", 00:18:11.329 "workload": "randread", 00:18:11.329 "status": "finished", 00:18:11.329 "queue_depth": 128, 00:18:11.329 "io_size": 4096, 00:18:11.329 "runtime": 2.006698, 00:18:11.329 "iops": 15761.215688658682, 00:18:11.329 "mibps": 61.567248783822976, 00:18:11.329 "io_failed": 0, 00:18:11.329 "io_timeout": 0, 00:18:11.329 "avg_latency_us": 8114.482245191257, 00:18:11.329 "min_latency_us": 7000.436363636363, 00:18:11.329 "max_latency_us": 32410.53090909091 00:18:11.329 } 00:18:11.329 ], 00:18:11.329 "core_count": 1 00:18:11.329 } 00:18:11.329 03:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:11.329 03:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:11.329 | .driver_specific 00:18:11.329 | .nvme_error 00:18:11.329 | .status_code 00:18:11.329 | .command_transient_transport_error' 00:18:11.329 03:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:11.329 03:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:11.588 03:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 124 > 0 )) 00:18:11.588 03:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80357 00:18:11.588 03:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 80357 ']' 00:18:11.588 03:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 80357 00:18:11.588 03:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:18:11.588 03:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:11.588 03:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80357 00:18:11.588 killing process with pid 80357 00:18:11.588 Received shutdown signal, test time was about 2.000000 seconds 00:18:11.588 00:18:11.588 Latency(us) 00:18:11.588 [2024-10-09T03:20:54.891Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.588 [2024-10-09T03:20:54.891Z] =================================================================================================================== 00:18:11.588 [2024-10-09T03:20:54.891Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:11.588 03:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:11.588 03:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:11.588 03:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80357' 00:18:11.588 03:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 80357 00:18:11.588 03:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 80357 00:18:11.847 03:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:18:11.847 03:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:11.847 03:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:11.847 03:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:11.847 03:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:11.847 03:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80417 00:18:11.847 03:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:18:11.847 03:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80417 /var/tmp/bperf.sock 00:18:11.847 03:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 80417 ']' 00:18:11.847 03:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:11.847 03:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:11.847 03:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:11.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:11.847 03:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:11.847 03:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:11.847 [2024-10-09 03:20:55.038438] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:18:11.847 [2024-10-09 03:20:55.038718] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80417 ] 00:18:11.847 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:11.847 Zero copy mechanism will not be used. 
00:18:12.106 [2024-10-09 03:20:55.171110] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.106 [2024-10-09 03:20:55.259776] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.106 [2024-10-09 03:20:55.313867] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:13.041 03:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:13.041 03:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:18:13.041 03:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:13.041 03:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:13.300 03:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:13.300 03:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.300 03:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:13.300 03:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.300 03:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:13.300 03:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:13.559 nvme0n1 00:18:13.559 03:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:13.559 03:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.559 03:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:13.559 03:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.559 03:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:13.559 03:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:13.559 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:13.559 Zero copy mechanism will not be used. 00:18:13.559 Running I/O for 2 seconds... 
00:18:13.559 [2024-10-09 03:20:56.828517] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.559 [2024-10-09 03:20:56.828583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.559 [2024-10-09 03:20:56.828617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.559 [2024-10-09 03:20:56.833228] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.559 [2024-10-09 03:20:56.833267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.559 [2024-10-09 03:20:56.833281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.559 [2024-10-09 03:20:56.837803] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.559 [2024-10-09 03:20:56.837842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.559 [2024-10-09 03:20:56.837873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.559 [2024-10-09 03:20:56.842176] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.559 [2024-10-09 03:20:56.842221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.559 [2024-10-09 03:20:56.842253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.559 [2024-10-09 03:20:56.846515] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.559 [2024-10-09 03:20:56.846556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.559 [2024-10-09 03:20:56.846588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.559 [2024-10-09 03:20:56.850857] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.559 [2024-10-09 03:20:56.850895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.559 [2024-10-09 03:20:56.850925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.559 [2024-10-09 03:20:56.855594] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.559 [2024-10-09 03:20:56.855634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.559 [2024-10-09 03:20:56.855665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.559 [2024-10-09 03:20:56.860059] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.559 [2024-10-09 03:20:56.860155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.559 [2024-10-09 03:20:56.860188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.820 [2024-10-09 03:20:56.864908] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.820 [2024-10-09 03:20:56.864946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.820 [2024-10-09 03:20:56.864975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.820 [2024-10-09 03:20:56.869588] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.820 [2024-10-09 03:20:56.869627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.820 [2024-10-09 03:20:56.869658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.820 [2024-10-09 03:20:56.873960] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.820 [2024-10-09 03:20:56.873998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.820 [2024-10-09 03:20:56.874027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.820 [2024-10-09 03:20:56.878323] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.820 [2024-10-09 03:20:56.878367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.820 [2024-10-09 03:20:56.878413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.820 [2024-10-09 03:20:56.882732] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.820 [2024-10-09 03:20:56.882771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.820 [2024-10-09 03:20:56.882800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.820 [2024-10-09 03:20:56.887160] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.820 [2024-10-09 03:20:56.887197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.820 [2024-10-09 03:20:56.887210] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.820 [2024-10-09 03:20:56.891351] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.820 [2024-10-09 03:20:56.891389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.820 [2024-10-09 03:20:56.891420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.820 [2024-10-09 03:20:56.895843] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.820 [2024-10-09 03:20:56.895880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.820 [2024-10-09 03:20:56.895909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.820 [2024-10-09 03:20:56.899997] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.820 [2024-10-09 03:20:56.900035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.820 [2024-10-09 03:20:56.900080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.820 [2024-10-09 03:20:56.904280] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.820 [2024-10-09 03:20:56.904351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.820 [2024-10-09 03:20:56.904382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.820 [2024-10-09 03:20:56.908347] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.820 [2024-10-09 03:20:56.908385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.820 [2024-10-09 03:20:56.908415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.820 [2024-10-09 03:20:56.912630] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.820 [2024-10-09 03:20:56.912697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.820 [2024-10-09 03:20:56.912710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.820 [2024-10-09 03:20:56.916783] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.820 [2024-10-09 03:20:56.916822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.820 [2024-10-09 
03:20:56.916851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.820 [2024-10-09 03:20:56.920864] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.820 [2024-10-09 03:20:56.920902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.820 [2024-10-09 03:20:56.920931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.820 [2024-10-09 03:20:56.925004] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.820 [2024-10-09 03:20:56.925042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.820 [2024-10-09 03:20:56.925103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.820 [2024-10-09 03:20:56.929113] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.820 [2024-10-09 03:20:56.929146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.820 [2024-10-09 03:20:56.929158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.820 [2024-10-09 03:20:56.933242] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.820 [2024-10-09 03:20:56.933279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.820 [2024-10-09 03:20:56.933326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.820 [2024-10-09 03:20:56.937665] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.820 [2024-10-09 03:20:56.937702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.820 [2024-10-09 03:20:56.937732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.820 [2024-10-09 03:20:56.941836] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.820 [2024-10-09 03:20:56.941875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.820 [2024-10-09 03:20:56.941906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.820 [2024-10-09 03:20:56.946222] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.820 [2024-10-09 03:20:56.946263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:13.820 [2024-10-09 03:20:56.946277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.820 [2024-10-09 03:20:56.950561] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.820 [2024-10-09 03:20:56.950598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.820 [2024-10-09 03:20:56.950629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.821 [2024-10-09 03:20:56.955013] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.821 [2024-10-09 03:20:56.955085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.821 [2024-10-09 03:20:56.955117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.821 [2024-10-09 03:20:56.959394] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.821 [2024-10-09 03:20:56.959628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.821 [2024-10-09 03:20:56.959663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.821 [2024-10-09 03:20:56.963896] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.821 [2024-10-09 03:20:56.963937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.821 [2024-10-09 03:20:56.963967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.821 [2024-10-09 03:20:56.968269] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.821 [2024-10-09 03:20:56.968309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.821 [2024-10-09 03:20:56.968340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.821 [2024-10-09 03:20:56.972552] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.821 [2024-10-09 03:20:56.972605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.821 [2024-10-09 03:20:56.972637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.821 [2024-10-09 03:20:56.976904] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.821 [2024-10-09 03:20:56.976947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.821 [2024-10-09 03:20:56.976993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.821 [2024-10-09 03:20:56.981284] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.821 [2024-10-09 03:20:56.981324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.821 [2024-10-09 03:20:56.981355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.821 [2024-10-09 03:20:56.985721] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.821 [2024-10-09 03:20:56.985759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.821 [2024-10-09 03:20:56.985788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.821 [2024-10-09 03:20:56.990191] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.821 [2024-10-09 03:20:56.990232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.821 [2024-10-09 03:20:56.990264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.821 [2024-10-09 03:20:56.994599] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.821 [2024-10-09 03:20:56.994637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.821 [2024-10-09 03:20:56.994667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.821 [2024-10-09 03:20:56.998922] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.821 [2024-10-09 03:20:56.998961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.821 [2024-10-09 03:20:56.998991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.821 [2024-10-09 03:20:57.003432] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.821 [2024-10-09 03:20:57.003484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.821 [2024-10-09 03:20:57.003520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.821 [2024-10-09 03:20:57.007871] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.821 [2024-10-09 03:20:57.007909] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.821 [2024-10-09 03:20:57.007940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.821 [2024-10-09 03:20:57.012217] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.821 [2024-10-09 03:20:57.012254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.821 [2024-10-09 03:20:57.012286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.821 [2024-10-09 03:20:57.016531] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.821 [2024-10-09 03:20:57.016586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.821 [2024-10-09 03:20:57.016617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.821 [2024-10-09 03:20:57.020842] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.821 [2024-10-09 03:20:57.020883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.821 [2024-10-09 03:20:57.020914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.821 [2024-10-09 03:20:57.025251] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.821 [2024-10-09 03:20:57.025290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.821 [2024-10-09 03:20:57.025320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.821 [2024-10-09 03:20:57.029454] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.821 [2024-10-09 03:20:57.029492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.821 [2024-10-09 03:20:57.029521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.821 [2024-10-09 03:20:57.033646] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.821 [2024-10-09 03:20:57.033684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.821 [2024-10-09 03:20:57.033715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.821 [2024-10-09 03:20:57.038143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 
00:18:13.821 [2024-10-09 03:20:57.038183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.821 [2024-10-09 03:20:57.038198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.821 [2024-10-09 03:20:57.042485] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.821 [2024-10-09 03:20:57.042546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.821 [2024-10-09 03:20:57.042576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.821 [2024-10-09 03:20:57.046737] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.821 [2024-10-09 03:20:57.046775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.821 [2024-10-09 03:20:57.046805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.821 [2024-10-09 03:20:57.050978] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.821 [2024-10-09 03:20:57.051017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.821 [2024-10-09 03:20:57.051046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.821 [2024-10-09 03:20:57.055346] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.821 [2024-10-09 03:20:57.055384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.821 [2024-10-09 03:20:57.055414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.821 [2024-10-09 03:20:57.059502] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.821 [2024-10-09 03:20:57.059539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.821 [2024-10-09 03:20:57.059570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.821 [2024-10-09 03:20:57.063579] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.821 [2024-10-09 03:20:57.063616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.821 [2024-10-09 03:20:57.063647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.821 [2024-10-09 03:20:57.067717] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.821 [2024-10-09 03:20:57.067755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.821 [2024-10-09 03:20:57.067785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.821 [2024-10-09 03:20:57.071812] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.821 [2024-10-09 03:20:57.071849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.821 [2024-10-09 03:20:57.071879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.821 [2024-10-09 03:20:57.075947] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.822 [2024-10-09 03:20:57.075985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.822 [2024-10-09 03:20:57.076015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.822 [2024-10-09 03:20:57.080094] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.822 [2024-10-09 03:20:57.080130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.822 [2024-10-09 03:20:57.080160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.822 [2024-10-09 03:20:57.084020] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.822 [2024-10-09 03:20:57.084085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.822 [2024-10-09 03:20:57.084115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.822 [2024-10-09 03:20:57.088067] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.822 [2024-10-09 03:20:57.088104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.822 [2024-10-09 03:20:57.088134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.822 [2024-10-09 03:20:57.092028] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.822 [2024-10-09 03:20:57.092092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.822 [2024-10-09 03:20:57.092105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.822 [2024-10-09 03:20:57.096058] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.822 [2024-10-09 03:20:57.096093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.822 [2024-10-09 03:20:57.096123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.822 [2024-10-09 03:20:57.099979] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.822 [2024-10-09 03:20:57.100017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.822 [2024-10-09 03:20:57.100047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.822 [2024-10-09 03:20:57.103941] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.822 [2024-10-09 03:20:57.103979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.822 [2024-10-09 03:20:57.104009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.822 [2024-10-09 03:20:57.107895] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.822 [2024-10-09 03:20:57.107933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.822 [2024-10-09 03:20:57.107964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.822 [2024-10-09 03:20:57.111892] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.822 [2024-10-09 03:20:57.111929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.822 [2024-10-09 03:20:57.111960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.822 [2024-10-09 03:20:57.115893] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.822 [2024-10-09 03:20:57.115930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.822 [2024-10-09 03:20:57.115961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.822 [2024-10-09 03:20:57.119895] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:13.822 [2024-10-09 03:20:57.119933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.822 [2024-10-09 03:20:57.119964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:18:14.085 [2024-10-09 03:20:57.123912] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.085 [2024-10-09 03:20:57.123949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.085 [2024-10-09 03:20:57.123979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.085 [2024-10-09 03:20:57.127948] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.085 [2024-10-09 03:20:57.127986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.085 [2024-10-09 03:20:57.128015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.085 [2024-10-09 03:20:57.131944] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.085 [2024-10-09 03:20:57.131982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.085 [2024-10-09 03:20:57.132012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.085 [2024-10-09 03:20:57.136024] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.085 [2024-10-09 03:20:57.136086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.085 [2024-10-09 03:20:57.136100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.085 [2024-10-09 03:20:57.139988] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.085 [2024-10-09 03:20:57.140025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.085 [2024-10-09 03:20:57.140056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.085 [2024-10-09 03:20:57.143952] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.085 [2024-10-09 03:20:57.143990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.085 [2024-10-09 03:20:57.144020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.085 [2024-10-09 03:20:57.147983] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.085 [2024-10-09 03:20:57.148020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.085 [2024-10-09 03:20:57.148051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.085 [2024-10-09 03:20:57.151953] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.085 [2024-10-09 03:20:57.151991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.085 [2024-10-09 03:20:57.152021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.085 [2024-10-09 03:20:57.155920] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.085 [2024-10-09 03:20:57.155959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.085 [2024-10-09 03:20:57.155990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.085 [2024-10-09 03:20:57.160039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.085 [2024-10-09 03:20:57.160085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.085 [2024-10-09 03:20:57.160116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.085 [2024-10-09 03:20:57.164008] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.085 [2024-10-09 03:20:57.164082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.085 [2024-10-09 03:20:57.164113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.085 [2024-10-09 03:20:57.168083] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.085 [2024-10-09 03:20:57.168119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.085 [2024-10-09 03:20:57.168147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.085 [2024-10-09 03:20:57.171989] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.085 [2024-10-09 03:20:57.172027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.085 [2024-10-09 03:20:57.172057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.085 [2024-10-09 03:20:57.175947] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.086 [2024-10-09 03:20:57.175984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.086 [2024-10-09 03:20:57.176014] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.086 [2024-10-09 03:20:57.179976] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.086 [2024-10-09 03:20:57.180029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.086 [2024-10-09 03:20:57.180060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.086 [2024-10-09 03:20:57.183952] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.086 [2024-10-09 03:20:57.183989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.086 [2024-10-09 03:20:57.184020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.086 [2024-10-09 03:20:57.188079] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.086 [2024-10-09 03:20:57.188115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.086 [2024-10-09 03:20:57.188145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.086 [2024-10-09 03:20:57.191978] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.086 [2024-10-09 03:20:57.192014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.086 [2024-10-09 03:20:57.192045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.086 [2024-10-09 03:20:57.196016] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.086 [2024-10-09 03:20:57.196086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.086 [2024-10-09 03:20:57.196116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.086 [2024-10-09 03:20:57.200103] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.086 [2024-10-09 03:20:57.200140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.086 [2024-10-09 03:20:57.200170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.086 [2024-10-09 03:20:57.204293] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.086 [2024-10-09 03:20:57.204329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:14.086 [2024-10-09 03:20:57.204359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.086 [2024-10-09 03:20:57.208555] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.086 [2024-10-09 03:20:57.208799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.086 [2024-10-09 03:20:57.208834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.086 [2024-10-09 03:20:57.212948] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.086 [2024-10-09 03:20:57.213002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.086 [2024-10-09 03:20:57.213031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.086 [2024-10-09 03:20:57.217069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.086 [2024-10-09 03:20:57.217131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.086 [2024-10-09 03:20:57.217145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.086 [2024-10-09 03:20:57.221220] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.086 [2024-10-09 03:20:57.221257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.086 [2024-10-09 03:20:57.221287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.086 [2024-10-09 03:20:57.225193] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.086 [2024-10-09 03:20:57.225228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.086 [2024-10-09 03:20:57.225259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.086 [2024-10-09 03:20:57.229135] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.086 [2024-10-09 03:20:57.229173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.086 [2024-10-09 03:20:57.229203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.086 [2024-10-09 03:20:57.233000] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.086 [2024-10-09 03:20:57.233037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.086 [2024-10-09 03:20:57.233097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.086 [2024-10-09 03:20:57.237055] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.086 [2024-10-09 03:20:57.237119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.086 [2024-10-09 03:20:57.237132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.086 [2024-10-09 03:20:57.241049] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.086 [2024-10-09 03:20:57.241115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.086 [2024-10-09 03:20:57.241144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.086 [2024-10-09 03:20:57.245004] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.086 [2024-10-09 03:20:57.245101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.086 [2024-10-09 03:20:57.245131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.086 [2024-10-09 03:20:57.249009] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.086 [2024-10-09 03:20:57.249072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.086 [2024-10-09 03:20:57.249086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.086 [2024-10-09 03:20:57.253042] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.086 [2024-10-09 03:20:57.253108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.086 [2024-10-09 03:20:57.253138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.086 [2024-10-09 03:20:57.257123] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.086 [2024-10-09 03:20:57.257158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.086 [2024-10-09 03:20:57.257188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.086 [2024-10-09 03:20:57.260997] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.086 [2024-10-09 03:20:57.261034] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.086 [2024-10-09 03:20:57.261078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.086 [2024-10-09 03:20:57.264880] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.086 [2024-10-09 03:20:57.264917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.086 [2024-10-09 03:20:57.264948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.086 [2024-10-09 03:20:57.268941] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.086 [2024-10-09 03:20:57.268979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.086 [2024-10-09 03:20:57.269010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.086 [2024-10-09 03:20:57.272953] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.086 [2024-10-09 03:20:57.272990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.086 [2024-10-09 03:20:57.273019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.086 [2024-10-09 03:20:57.276925] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.086 [2024-10-09 03:20:57.276962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.086 [2024-10-09 03:20:57.276991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.086 [2024-10-09 03:20:57.280925] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.086 [2024-10-09 03:20:57.280963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.086 [2024-10-09 03:20:57.280992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.086 [2024-10-09 03:20:57.284874] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.086 [2024-10-09 03:20:57.284911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.086 [2024-10-09 03:20:57.284942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.087 [2024-10-09 03:20:57.288868] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 
00:18:14.087 [2024-10-09 03:20:57.288906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.087 [2024-10-09 03:20:57.288936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.087 [2024-10-09 03:20:57.292911] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.087 [2024-10-09 03:20:57.292946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.087 [2024-10-09 03:20:57.292976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.087 [2024-10-09 03:20:57.296953] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.087 [2024-10-09 03:20:57.296990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.087 [2024-10-09 03:20:57.297019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.087 [2024-10-09 03:20:57.300938] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.087 [2024-10-09 03:20:57.300976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.087 [2024-10-09 03:20:57.301005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.087 [2024-10-09 03:20:57.304921] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.087 [2024-10-09 03:20:57.304958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.087 [2024-10-09 03:20:57.304987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.087 [2024-10-09 03:20:57.308856] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.087 [2024-10-09 03:20:57.308894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.087 [2024-10-09 03:20:57.308924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.087 [2024-10-09 03:20:57.312799] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.087 [2024-10-09 03:20:57.312835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.087 [2024-10-09 03:20:57.312864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.087 [2024-10-09 03:20:57.316759] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.087 [2024-10-09 03:20:57.316796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.087 [2024-10-09 03:20:57.316824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.087 [2024-10-09 03:20:57.320857] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.087 [2024-10-09 03:20:57.320894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.087 [2024-10-09 03:20:57.320925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.087 [2024-10-09 03:20:57.324843] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.087 [2024-10-09 03:20:57.324880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.087 [2024-10-09 03:20:57.324908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.087 [2024-10-09 03:20:57.328819] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.087 [2024-10-09 03:20:57.328856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.087 [2024-10-09 03:20:57.328884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.087 [2024-10-09 03:20:57.332893] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.087 [2024-10-09 03:20:57.332930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.087 [2024-10-09 03:20:57.332959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.087 [2024-10-09 03:20:57.336978] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.087 [2024-10-09 03:20:57.337015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.087 [2024-10-09 03:20:57.337045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.087 [2024-10-09 03:20:57.340918] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.087 [2024-10-09 03:20:57.340956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.087 [2024-10-09 03:20:57.340986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.087 [2024-10-09 03:20:57.344876] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.087 [2024-10-09 03:20:57.344913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.087 [2024-10-09 03:20:57.344944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.087 [2024-10-09 03:20:57.348933] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.087 [2024-10-09 03:20:57.348971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.087 [2024-10-09 03:20:57.349000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.087 [2024-10-09 03:20:57.352918] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.087 [2024-10-09 03:20:57.352955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.087 [2024-10-09 03:20:57.352985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.087 [2024-10-09 03:20:57.357009] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.087 [2024-10-09 03:20:57.357095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.087 [2024-10-09 03:20:57.357109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.087 [2024-10-09 03:20:57.360942] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.087 [2024-10-09 03:20:57.360979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.087 [2024-10-09 03:20:57.361009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.087 [2024-10-09 03:20:57.364938] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.087 [2024-10-09 03:20:57.364974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.087 [2024-10-09 03:20:57.365005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.087 [2024-10-09 03:20:57.368926] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.087 [2024-10-09 03:20:57.368963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.087 [2024-10-09 03:20:57.368992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:18:14.087 [2024-10-09 03:20:57.372940] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.087 [2024-10-09 03:20:57.372976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.087 [2024-10-09 03:20:57.373004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.087 [2024-10-09 03:20:57.376972] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.087 [2024-10-09 03:20:57.377007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.087 [2024-10-09 03:20:57.377039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.087 [2024-10-09 03:20:57.380944] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.087 [2024-10-09 03:20:57.380982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.087 [2024-10-09 03:20:57.381011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.361 [2024-10-09 03:20:57.384916] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.361 [2024-10-09 03:20:57.384954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.361 [2024-10-09 03:20:57.384985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.361 [2024-10-09 03:20:57.388908] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.361 [2024-10-09 03:20:57.388945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.361 [2024-10-09 03:20:57.388974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.361 [2024-10-09 03:20:57.392915] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.361 [2024-10-09 03:20:57.392953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.361 [2024-10-09 03:20:57.392984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.361 [2024-10-09 03:20:57.396916] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.361 [2024-10-09 03:20:57.396954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.361 [2024-10-09 03:20:57.396983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.361 [2024-10-09 03:20:57.400862] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.361 [2024-10-09 03:20:57.400898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.361 [2024-10-09 03:20:57.400926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.361 [2024-10-09 03:20:57.404831] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.361 [2024-10-09 03:20:57.404868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.361 [2024-10-09 03:20:57.404898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.361 [2024-10-09 03:20:57.408778] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.361 [2024-10-09 03:20:57.408815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.361 [2024-10-09 03:20:57.408844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.361 [2024-10-09 03:20:57.412780] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.361 [2024-10-09 03:20:57.412818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.361 [2024-10-09 03:20:57.412848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.361 [2024-10-09 03:20:57.416829] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.361 [2024-10-09 03:20:57.416865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.361 [2024-10-09 03:20:57.416896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.361 [2024-10-09 03:20:57.420775] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.361 [2024-10-09 03:20:57.420811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.361 [2024-10-09 03:20:57.420841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.361 [2024-10-09 03:20:57.424836] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.361 [2024-10-09 03:20:57.424872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.361 [2024-10-09 03:20:57.424902] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.361 [2024-10-09 03:20:57.428844] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.361 [2024-10-09 03:20:57.428882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.361 [2024-10-09 03:20:57.428911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.361 [2024-10-09 03:20:57.432936] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.361 [2024-10-09 03:20:57.432972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.361 [2024-10-09 03:20:57.433002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.361 [2024-10-09 03:20:57.436979] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.361 [2024-10-09 03:20:57.437017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.361 [2024-10-09 03:20:57.437047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.361 [2024-10-09 03:20:57.440983] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.361 [2024-10-09 03:20:57.441021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.361 [2024-10-09 03:20:57.441052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.361 [2024-10-09 03:20:57.445029] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.361 [2024-10-09 03:20:57.445112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.361 [2024-10-09 03:20:57.445127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.361 [2024-10-09 03:20:57.449172] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.361 [2024-10-09 03:20:57.449209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.361 [2024-10-09 03:20:57.449240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.361 [2024-10-09 03:20:57.453150] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.361 [2024-10-09 03:20:57.453187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:14.361 [2024-10-09 03:20:57.453217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.361 [2024-10-09 03:20:57.457027] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.361 [2024-10-09 03:20:57.457110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.361 [2024-10-09 03:20:57.457124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.361 [2024-10-09 03:20:57.460951] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.361 [2024-10-09 03:20:57.460988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.361 [2024-10-09 03:20:57.461018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.361 [2024-10-09 03:20:57.464926] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.361 [2024-10-09 03:20:57.464961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.361 [2024-10-09 03:20:57.464990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.361 [2024-10-09 03:20:57.468932] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.361 [2024-10-09 03:20:57.468968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.361 [2024-10-09 03:20:57.468996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.361 [2024-10-09 03:20:57.472885] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.361 [2024-10-09 03:20:57.472922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.361 [2024-10-09 03:20:57.472953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.361 [2024-10-09 03:20:57.476817] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.361 [2024-10-09 03:20:57.476854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.361 [2024-10-09 03:20:57.476883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.361 [2024-10-09 03:20:57.480877] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.361 [2024-10-09 03:20:57.480914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.361 [2024-10-09 03:20:57.480943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.361 [2024-10-09 03:20:57.484775] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.362 [2024-10-09 03:20:57.484811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.362 [2024-10-09 03:20:57.484839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.362 [2024-10-09 03:20:57.488790] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.362 [2024-10-09 03:20:57.488827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.362 [2024-10-09 03:20:57.488856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.362 [2024-10-09 03:20:57.492747] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.362 [2024-10-09 03:20:57.492784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.362 [2024-10-09 03:20:57.492814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.362 [2024-10-09 03:20:57.496814] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.362 [2024-10-09 03:20:57.496853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.362 [2024-10-09 03:20:57.496883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.362 [2024-10-09 03:20:57.500824] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.362 [2024-10-09 03:20:57.500861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.362 [2024-10-09 03:20:57.500891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.362 [2024-10-09 03:20:57.504866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.362 [2024-10-09 03:20:57.504903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.362 [2024-10-09 03:20:57.504931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.362 [2024-10-09 03:20:57.508889] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.362 [2024-10-09 03:20:57.508925] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.362 [2024-10-09 03:20:57.508956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.362 [2024-10-09 03:20:57.512938] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.362 [2024-10-09 03:20:57.512975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.362 [2024-10-09 03:20:57.513004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.362 [2024-10-09 03:20:57.516919] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.362 [2024-10-09 03:20:57.516955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.362 [2024-10-09 03:20:57.516986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.362 [2024-10-09 03:20:57.520849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.362 [2024-10-09 03:20:57.520886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.362 [2024-10-09 03:20:57.520916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.362 [2024-10-09 03:20:57.524763] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.362 [2024-10-09 03:20:57.524799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.362 [2024-10-09 03:20:57.524829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.362 [2024-10-09 03:20:57.528803] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.362 [2024-10-09 03:20:57.528840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.362 [2024-10-09 03:20:57.528868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.362 [2024-10-09 03:20:57.532820] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.362 [2024-10-09 03:20:57.532857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.362 [2024-10-09 03:20:57.532886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.362 [2024-10-09 03:20:57.537056] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 
00:18:14.362 [2024-10-09 03:20:57.537121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.362 [2024-10-09 03:20:57.537134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.362 [2024-10-09 03:20:57.541308] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.362 [2024-10-09 03:20:57.541342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.362 [2024-10-09 03:20:57.541355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.362 [2024-10-09 03:20:57.545849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.362 [2024-10-09 03:20:57.545887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.362 [2024-10-09 03:20:57.545917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.362 [2024-10-09 03:20:57.550430] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.362 [2024-10-09 03:20:57.550513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.362 [2024-10-09 03:20:57.550543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.362 [2024-10-09 03:20:57.555121] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.362 [2024-10-09 03:20:57.555167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.362 [2024-10-09 03:20:57.555182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.362 [2024-10-09 03:20:57.559553] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.362 [2024-10-09 03:20:57.559764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.362 [2024-10-09 03:20:57.559799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.362 [2024-10-09 03:20:57.564272] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.362 [2024-10-09 03:20:57.564314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.362 [2024-10-09 03:20:57.564329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.362 [2024-10-09 03:20:57.568788] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.362 [2024-10-09 03:20:57.568824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.362 [2024-10-09 03:20:57.568855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.362 [2024-10-09 03:20:57.573211] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.362 [2024-10-09 03:20:57.573247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.362 [2024-10-09 03:20:57.573261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.362 [2024-10-09 03:20:57.577432] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.362 [2024-10-09 03:20:57.577466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.362 [2024-10-09 03:20:57.577495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.362 [2024-10-09 03:20:57.581535] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.362 [2024-10-09 03:20:57.581585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.362 [2024-10-09 03:20:57.581615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.362 [2024-10-09 03:20:57.585578] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.362 [2024-10-09 03:20:57.585613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.362 [2024-10-09 03:20:57.585644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.362 [2024-10-09 03:20:57.589469] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.362 [2024-10-09 03:20:57.589507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.362 [2024-10-09 03:20:57.589537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.362 [2024-10-09 03:20:57.593466] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.362 [2024-10-09 03:20:57.593504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.362 [2024-10-09 03:20:57.593535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.362 [2024-10-09 03:20:57.597392] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.362 [2024-10-09 03:20:57.597428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.362 [2024-10-09 03:20:57.597457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.362 [2024-10-09 03:20:57.601350] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.362 [2024-10-09 03:20:57.601388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.362 [2024-10-09 03:20:57.601419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.363 [2024-10-09 03:20:57.605326] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.363 [2024-10-09 03:20:57.605363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.363 [2024-10-09 03:20:57.605393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.363 [2024-10-09 03:20:57.609273] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.363 [2024-10-09 03:20:57.609310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.363 [2024-10-09 03:20:57.609340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.363 [2024-10-09 03:20:57.613194] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.363 [2024-10-09 03:20:57.613230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.363 [2024-10-09 03:20:57.613260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.363 [2024-10-09 03:20:57.617172] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.363 [2024-10-09 03:20:57.617207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.363 [2024-10-09 03:20:57.617236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.363 [2024-10-09 03:20:57.621145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.363 [2024-10-09 03:20:57.621181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.363 [2024-10-09 03:20:57.621211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:18:14.363 [2024-10-09 03:20:57.625170] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.363 [2024-10-09 03:20:57.625206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.363 [2024-10-09 03:20:57.625235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.363 [2024-10-09 03:20:57.629121] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.363 [2024-10-09 03:20:57.629156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.363 [2024-10-09 03:20:57.629186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.363 [2024-10-09 03:20:57.633136] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.363 [2024-10-09 03:20:57.633171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.363 [2024-10-09 03:20:57.633201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.363 [2024-10-09 03:20:57.637064] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.363 [2024-10-09 03:20:57.637149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.363 [2024-10-09 03:20:57.637164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.363 [2024-10-09 03:20:57.641038] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.363 [2024-10-09 03:20:57.641102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.363 [2024-10-09 03:20:57.641115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.363 [2024-10-09 03:20:57.645025] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.363 [2024-10-09 03:20:57.645088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.363 [2024-10-09 03:20:57.645102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.363 [2024-10-09 03:20:57.648978] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.363 [2024-10-09 03:20:57.649016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.363 [2024-10-09 03:20:57.649047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.363 [2024-10-09 03:20:57.652921] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.363 [2024-10-09 03:20:57.652959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.363 [2024-10-09 03:20:57.652972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.363 [2024-10-09 03:20:57.656875] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.363 [2024-10-09 03:20:57.656913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.363 [2024-10-09 03:20:57.656944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.623 [2024-10-09 03:20:57.660939] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.623 [2024-10-09 03:20:57.660993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.623 [2024-10-09 03:20:57.661022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.623 [2024-10-09 03:20:57.664951] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.623 [2024-10-09 03:20:57.665004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.623 [2024-10-09 03:20:57.665033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.623 [2024-10-09 03:20:57.668999] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.623 [2024-10-09 03:20:57.669036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.623 [2024-10-09 03:20:57.669077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.623 [2024-10-09 03:20:57.672935] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.623 [2024-10-09 03:20:57.672988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.623 [2024-10-09 03:20:57.673019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.623 [2024-10-09 03:20:57.676982] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.623 [2024-10-09 03:20:57.677019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.623 [2024-10-09 03:20:57.677050] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.623 [2024-10-09 03:20:57.680949] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.623 [2024-10-09 03:20:57.681002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.623 [2024-10-09 03:20:57.681032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.623 [2024-10-09 03:20:57.684953] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.623 [2024-10-09 03:20:57.685005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.623 [2024-10-09 03:20:57.685035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.623 [2024-10-09 03:20:57.688960] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.623 [2024-10-09 03:20:57.689012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.623 [2024-10-09 03:20:57.689043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.623 [2024-10-09 03:20:57.692908] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.623 [2024-10-09 03:20:57.692946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.623 [2024-10-09 03:20:57.692991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.623 [2024-10-09 03:20:57.696941] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.623 [2024-10-09 03:20:57.696991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.623 [2024-10-09 03:20:57.697022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.623 [2024-10-09 03:20:57.700917] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.623 [2024-10-09 03:20:57.700954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.623 [2024-10-09 03:20:57.700984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.623 [2024-10-09 03:20:57.704916] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.623 [2024-10-09 03:20:57.704953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:14.623 [2024-10-09 03:20:57.704984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.623 [2024-10-09 03:20:57.708856] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.623 [2024-10-09 03:20:57.708893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.623 [2024-10-09 03:20:57.708923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.623 [2024-10-09 03:20:57.712821] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.623 [2024-10-09 03:20:57.712858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.623 [2024-10-09 03:20:57.712889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.623 [2024-10-09 03:20:57.716789] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.623 [2024-10-09 03:20:57.716826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.623 [2024-10-09 03:20:57.716857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.623 [2024-10-09 03:20:57.720778] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.623 [2024-10-09 03:20:57.720817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.623 [2024-10-09 03:20:57.720847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.623 [2024-10-09 03:20:57.724750] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.623 [2024-10-09 03:20:57.724786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.623 [2024-10-09 03:20:57.724817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.623 [2024-10-09 03:20:57.728912] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.623 [2024-10-09 03:20:57.728950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.624 [2024-10-09 03:20:57.728995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.624 [2024-10-09 03:20:57.733098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.624 [2024-10-09 03:20:57.733146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.624 [2024-10-09 03:20:57.733177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.624 [2024-10-09 03:20:57.737098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.624 [2024-10-09 03:20:57.737144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.624 [2024-10-09 03:20:57.737175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.624 [2024-10-09 03:20:57.741065] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.624 [2024-10-09 03:20:57.741133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.624 [2024-10-09 03:20:57.741163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.624 [2024-10-09 03:20:57.745078] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.624 [2024-10-09 03:20:57.745143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.624 [2024-10-09 03:20:57.745156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.624 [2024-10-09 03:20:57.749031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.624 [2024-10-09 03:20:57.749096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.624 [2024-10-09 03:20:57.749110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.624 [2024-10-09 03:20:57.752994] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.624 [2024-10-09 03:20:57.753031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.624 [2024-10-09 03:20:57.753061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.624 [2024-10-09 03:20:57.756967] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.624 [2024-10-09 03:20:57.757005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.624 [2024-10-09 03:20:57.757036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.624 [2024-10-09 03:20:57.761018] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.624 [2024-10-09 03:20:57.761112] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.624 [2024-10-09 03:20:57.761127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.624 [2024-10-09 03:20:57.765028] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.624 [2024-10-09 03:20:57.765117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.624 [2024-10-09 03:20:57.765132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.624 [2024-10-09 03:20:57.769073] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.624 [2024-10-09 03:20:57.769138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.624 [2024-10-09 03:20:57.769152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.624 [2024-10-09 03:20:57.773016] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.624 [2024-10-09 03:20:57.773110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.624 [2024-10-09 03:20:57.773124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.624 [2024-10-09 03:20:57.777059] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.624 [2024-10-09 03:20:57.777143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.624 [2024-10-09 03:20:57.777157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.624 [2024-10-09 03:20:57.781009] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.624 [2024-10-09 03:20:57.781083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.624 [2024-10-09 03:20:57.781097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.624 [2024-10-09 03:20:57.785011] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.624 [2024-10-09 03:20:57.785049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.624 [2024-10-09 03:20:57.785093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.624 [2024-10-09 03:20:57.789005] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 
00:18:14.624 [2024-10-09 03:20:57.789042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.624 [2024-10-09 03:20:57.789111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.624 [2024-10-09 03:20:57.793064] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.624 [2024-10-09 03:20:57.793130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.624 [2024-10-09 03:20:57.793160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.624 [2024-10-09 03:20:57.796979] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.624 [2024-10-09 03:20:57.797016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.624 [2024-10-09 03:20:57.797047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.624 [2024-10-09 03:20:57.800969] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.624 [2024-10-09 03:20:57.801021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.624 [2024-10-09 03:20:57.801051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.624 [2024-10-09 03:20:57.804940] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.624 [2024-10-09 03:20:57.804993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.624 [2024-10-09 03:20:57.805024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.624 [2024-10-09 03:20:57.808931] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.624 [2024-10-09 03:20:57.808969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.624 [2024-10-09 03:20:57.808982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.624 [2024-10-09 03:20:57.812962] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.624 [2024-10-09 03:20:57.813014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.624 [2024-10-09 03:20:57.813044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.624 [2024-10-09 03:20:57.816941] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.624 [2024-10-09 03:20:57.816994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.624 [2024-10-09 03:20:57.817026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.624 7518.00 IOPS, 939.75 MiB/s [2024-10-09T03:20:57.927Z] [2024-10-09 03:20:57.823260] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.624 [2024-10-09 03:20:57.823314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.624 [2024-10-09 03:20:57.823345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.624 [2024-10-09 03:20:57.827525] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.624 [2024-10-09 03:20:57.827564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.624 [2024-10-09 03:20:57.827595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.624 [2024-10-09 03:20:57.831969] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.624 [2024-10-09 03:20:57.832008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.624 [2024-10-09 03:20:57.832039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.624 [2024-10-09 03:20:57.836670] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.624 [2024-10-09 03:20:57.836708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.624 [2024-10-09 03:20:57.836738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.624 [2024-10-09 03:20:57.841531] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.624 [2024-10-09 03:20:57.841749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.625 [2024-10-09 03:20:57.841768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.625 [2024-10-09 03:20:57.846285] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.625 [2024-10-09 03:20:57.846328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.625 [2024-10-09 03:20:57.846342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:18:14.625 [2024-10-09 03:20:57.850918] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.625 [2024-10-09 03:20:57.850988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.625 [2024-10-09 03:20:57.851018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.625 [2024-10-09 03:20:57.855372] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.625 [2024-10-09 03:20:57.855409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.625 [2024-10-09 03:20:57.855439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.625 [2024-10-09 03:20:57.859760] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.625 [2024-10-09 03:20:57.859800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.625 [2024-10-09 03:20:57.859830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.625 [2024-10-09 03:20:57.864161] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.625 [2024-10-09 03:20:57.864199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.625 [2024-10-09 03:20:57.864230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.625 [2024-10-09 03:20:57.868626] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.625 [2024-10-09 03:20:57.868827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.625 [2024-10-09 03:20:57.868844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.625 [2024-10-09 03:20:57.873104] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.625 [2024-10-09 03:20:57.873166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.625 [2024-10-09 03:20:57.873180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.625 [2024-10-09 03:20:57.877139] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.625 [2024-10-09 03:20:57.877175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.625 [2024-10-09 03:20:57.877205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.625 [2024-10-09 03:20:57.881171] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.625 [2024-10-09 03:20:57.881208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.625 [2024-10-09 03:20:57.881239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.625 [2024-10-09 03:20:57.885086] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.625 [2024-10-09 03:20:57.885122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.625 [2024-10-09 03:20:57.885152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.625 [2024-10-09 03:20:57.889203] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.625 [2024-10-09 03:20:57.889239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.625 [2024-10-09 03:20:57.889269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.625 [2024-10-09 03:20:57.893102] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.625 [2024-10-09 03:20:57.893138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.625 [2024-10-09 03:20:57.893168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.625 [2024-10-09 03:20:57.896992] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.625 [2024-10-09 03:20:57.897029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.625 [2024-10-09 03:20:57.897060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.625 [2024-10-09 03:20:57.900945] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.625 [2024-10-09 03:20:57.900983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.625 [2024-10-09 03:20:57.901014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.625 [2024-10-09 03:20:57.904892] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.625 [2024-10-09 03:20:57.904929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.625 [2024-10-09 03:20:57.904958] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.625 [2024-10-09 03:20:57.908801] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.625 [2024-10-09 03:20:57.908839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.625 [2024-10-09 03:20:57.908870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.625 [2024-10-09 03:20:57.912832] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.625 [2024-10-09 03:20:57.912871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.625 [2024-10-09 03:20:57.912901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.625 [2024-10-09 03:20:57.917096] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.625 [2024-10-09 03:20:57.917162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.625 [2024-10-09 03:20:57.917192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.625 [2024-10-09 03:20:57.921124] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.625 [2024-10-09 03:20:57.921160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.625 [2024-10-09 03:20:57.921190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.885 [2024-10-09 03:20:57.925135] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.885 [2024-10-09 03:20:57.925170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.885 [2024-10-09 03:20:57.925200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.885 [2024-10-09 03:20:57.929074] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.885 [2024-10-09 03:20:57.929150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.885 [2024-10-09 03:20:57.929180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.885 [2024-10-09 03:20:57.932991] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.885 [2024-10-09 03:20:57.933029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.885 [2024-10-09 
03:20:57.933058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.885 [2024-10-09 03:20:57.937069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.885 [2024-10-09 03:20:57.937133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.885 [2024-10-09 03:20:57.937148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.885 [2024-10-09 03:20:57.941067] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.885 [2024-10-09 03:20:57.941133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.885 [2024-10-09 03:20:57.941163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.885 [2024-10-09 03:20:57.945059] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.885 [2024-10-09 03:20:57.945125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.885 [2024-10-09 03:20:57.945155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.885 [2024-10-09 03:20:57.949221] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.885 [2024-10-09 03:20:57.949304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.885 [2024-10-09 03:20:57.949317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.885 [2024-10-09 03:20:57.953257] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.885 [2024-10-09 03:20:57.953293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.885 [2024-10-09 03:20:57.953323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.885 [2024-10-09 03:20:57.957306] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.885 [2024-10-09 03:20:57.957342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.885 [2024-10-09 03:20:57.957388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.885 [2024-10-09 03:20:57.961833] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:14.885 [2024-10-09 03:20:57.961872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:14.885 [2024-10-09 03:20:57.961902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... repeated records: nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0), each followed by nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 (len:32, varying lba) and nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15, recurring roughly every 4 ms from 03:20:57.965 through 03:20:58.497 ...]
00:18:15.410 [2024-10-09 03:20:58.501026] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.410 [2024-10-09 03:20:58.501090] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.410 [2024-10-09 03:20:58.501103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.410 [2024-10-09 03:20:58.505083] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.410 [2024-10-09 03:20:58.505120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.410 [2024-10-09 03:20:58.505150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.410 [2024-10-09 03:20:58.509073] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.410 [2024-10-09 03:20:58.509109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.410 [2024-10-09 03:20:58.509139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.410 [2024-10-09 03:20:58.513137] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.410 [2024-10-09 03:20:58.513174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.410 [2024-10-09 03:20:58.513204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.410 [2024-10-09 03:20:58.517115] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.410 [2024-10-09 03:20:58.517151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.410 [2024-10-09 03:20:58.517182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.410 [2024-10-09 03:20:58.521175] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.410 [2024-10-09 03:20:58.521212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.410 [2024-10-09 03:20:58.521242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.410 [2024-10-09 03:20:58.525197] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.410 [2024-10-09 03:20:58.525235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.410 [2024-10-09 03:20:58.525265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.410 [2024-10-09 03:20:58.529206] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.410 [2024-10-09 
03:20:58.529242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.410 [2024-10-09 03:20:58.529272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.410 [2024-10-09 03:20:58.533239] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.410 [2024-10-09 03:20:58.533275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.410 [2024-10-09 03:20:58.533304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.410 [2024-10-09 03:20:58.537379] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.410 [2024-10-09 03:20:58.537416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.410 [2024-10-09 03:20:58.537447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.410 [2024-10-09 03:20:58.541426] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.410 [2024-10-09 03:20:58.541463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.410 [2024-10-09 03:20:58.541494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.410 [2024-10-09 03:20:58.545372] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.410 [2024-10-09 03:20:58.545409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.410 [2024-10-09 03:20:58.545440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.410 [2024-10-09 03:20:58.549350] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.410 [2024-10-09 03:20:58.549387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.410 [2024-10-09 03:20:58.549418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.410 [2024-10-09 03:20:58.553373] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.410 [2024-10-09 03:20:58.553410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.410 [2024-10-09 03:20:58.553441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.410 [2024-10-09 03:20:58.557631] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x8070e0) 00:18:15.410 [2024-10-09 03:20:58.557667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.410 [2024-10-09 03:20:58.557698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.410 [2024-10-09 03:20:58.561853] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.410 [2024-10-09 03:20:58.561890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.410 [2024-10-09 03:20:58.561921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.410 [2024-10-09 03:20:58.566342] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.410 [2024-10-09 03:20:58.566381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.410 [2024-10-09 03:20:58.566396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.410 [2024-10-09 03:20:58.571186] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.410 [2024-10-09 03:20:58.571250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.410 [2024-10-09 03:20:58.571264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.410 [2024-10-09 03:20:58.575757] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.410 [2024-10-09 03:20:58.575793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.410 [2024-10-09 03:20:58.575824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.410 [2024-10-09 03:20:58.580116] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.410 [2024-10-09 03:20:58.580195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.410 [2024-10-09 03:20:58.580210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.410 [2024-10-09 03:20:58.584448] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.410 [2024-10-09 03:20:58.584483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.410 [2024-10-09 03:20:58.584514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.410 [2024-10-09 03:20:58.588738] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.410 [2024-10-09 03:20:58.588773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.410 [2024-10-09 03:20:58.588804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.410 [2024-10-09 03:20:58.592938] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.410 [2024-10-09 03:20:58.592976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.410 [2024-10-09 03:20:58.593006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.410 [2024-10-09 03:20:58.597032] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.410 [2024-10-09 03:20:58.597091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.410 [2024-10-09 03:20:58.597104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.410 [2024-10-09 03:20:58.601158] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.410 [2024-10-09 03:20:58.601194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.410 [2024-10-09 03:20:58.601207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.411 [2024-10-09 03:20:58.605321] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.411 [2024-10-09 03:20:58.605357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.411 [2024-10-09 03:20:58.605388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.411 [2024-10-09 03:20:58.609508] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.411 [2024-10-09 03:20:58.609545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.411 [2024-10-09 03:20:58.609575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.411 [2024-10-09 03:20:58.613785] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.411 [2024-10-09 03:20:58.613822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.411 [2024-10-09 03:20:58.613852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:18:15.411 [2024-10-09 03:20:58.618606] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.411 [2024-10-09 03:20:58.618648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.411 [2024-10-09 03:20:58.618663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.411 [2024-10-09 03:20:58.623375] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.411 [2024-10-09 03:20:58.623627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.411 [2024-10-09 03:20:58.623758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.411 [2024-10-09 03:20:58.627879] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.411 [2024-10-09 03:20:58.628132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.411 [2024-10-09 03:20:58.628320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.411 [2024-10-09 03:20:58.632505] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.411 [2024-10-09 03:20:58.632717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.411 [2024-10-09 03:20:58.632867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.411 [2024-10-09 03:20:58.637131] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.411 [2024-10-09 03:20:58.637330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.411 [2024-10-09 03:20:58.637530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.411 [2024-10-09 03:20:58.641611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.411 [2024-10-09 03:20:58.641841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.411 [2024-10-09 03:20:58.642024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.411 [2024-10-09 03:20:58.646179] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.411 [2024-10-09 03:20:58.646368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.411 [2024-10-09 03:20:58.646538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.411 [2024-10-09 03:20:58.650766] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.411 [2024-10-09 03:20:58.650805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.411 [2024-10-09 03:20:58.650836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.411 [2024-10-09 03:20:58.654852] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.411 [2024-10-09 03:20:58.654891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.411 [2024-10-09 03:20:58.654920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.411 [2024-10-09 03:20:58.658937] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.411 [2024-10-09 03:20:58.658975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.411 [2024-10-09 03:20:58.659005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.411 [2024-10-09 03:20:58.663019] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.411 [2024-10-09 03:20:58.663100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.411 [2024-10-09 03:20:58.663114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.411 [2024-10-09 03:20:58.667077] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.411 [2024-10-09 03:20:58.667140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.411 [2024-10-09 03:20:58.667154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.411 [2024-10-09 03:20:58.671093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.411 [2024-10-09 03:20:58.671131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.411 [2024-10-09 03:20:58.671161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.411 [2024-10-09 03:20:58.675129] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.411 [2024-10-09 03:20:58.675167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.411 [2024-10-09 03:20:58.675197] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.411 [2024-10-09 03:20:58.679145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.411 [2024-10-09 03:20:58.679197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.411 [2024-10-09 03:20:58.679228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.411 [2024-10-09 03:20:58.683205] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.411 [2024-10-09 03:20:58.683244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.411 [2024-10-09 03:20:58.683274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.411 [2024-10-09 03:20:58.687273] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.411 [2024-10-09 03:20:58.687311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.411 [2024-10-09 03:20:58.687342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.411 [2024-10-09 03:20:58.691268] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.411 [2024-10-09 03:20:58.691305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.411 [2024-10-09 03:20:58.691336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.411 [2024-10-09 03:20:58.695301] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.411 [2024-10-09 03:20:58.695339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.411 [2024-10-09 03:20:58.695370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.411 [2024-10-09 03:20:58.699371] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.411 [2024-10-09 03:20:58.699409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.411 [2024-10-09 03:20:58.699439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.411 [2024-10-09 03:20:58.703347] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.411 [2024-10-09 03:20:58.703385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:15.411 [2024-10-09 03:20:58.703416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.411 [2024-10-09 03:20:58.707401] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.411 [2024-10-09 03:20:58.707440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.411 [2024-10-09 03:20:58.707470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.670 [2024-10-09 03:20:58.711436] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.670 [2024-10-09 03:20:58.711488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.670 [2024-10-09 03:20:58.711518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.670 [2024-10-09 03:20:58.715482] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.670 [2024-10-09 03:20:58.715519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.670 [2024-10-09 03:20:58.715550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.670 [2024-10-09 03:20:58.719473] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.670 [2024-10-09 03:20:58.719510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.670 [2024-10-09 03:20:58.719540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.670 [2024-10-09 03:20:58.723400] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.670 [2024-10-09 03:20:58.723438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.670 [2024-10-09 03:20:58.723468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.670 [2024-10-09 03:20:58.727388] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.671 [2024-10-09 03:20:58.727424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.671 [2024-10-09 03:20:58.727455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.671 [2024-10-09 03:20:58.731417] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.671 [2024-10-09 03:20:58.731468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.671 [2024-10-09 03:20:58.731498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.671 [2024-10-09 03:20:58.735474] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.671 [2024-10-09 03:20:58.735510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.671 [2024-10-09 03:20:58.735542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.671 [2024-10-09 03:20:58.739393] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.671 [2024-10-09 03:20:58.739431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.671 [2024-10-09 03:20:58.739460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.671 [2024-10-09 03:20:58.743381] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.671 [2024-10-09 03:20:58.743418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.671 [2024-10-09 03:20:58.743449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.671 [2024-10-09 03:20:58.747530] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.671 [2024-10-09 03:20:58.747585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.671 [2024-10-09 03:20:58.747615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.671 [2024-10-09 03:20:58.751544] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.671 [2024-10-09 03:20:58.751598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.671 [2024-10-09 03:20:58.751628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.671 [2024-10-09 03:20:58.755502] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.671 [2024-10-09 03:20:58.755540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.671 [2024-10-09 03:20:58.755570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.671 [2024-10-09 03:20:58.759468] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.671 [2024-10-09 03:20:58.759520] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.671 [2024-10-09 03:20:58.759551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.671 [2024-10-09 03:20:58.763415] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.671 [2024-10-09 03:20:58.763468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.671 [2024-10-09 03:20:58.763499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.671 [2024-10-09 03:20:58.767486] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.671 [2024-10-09 03:20:58.767523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.671 [2024-10-09 03:20:58.767553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.671 [2024-10-09 03:20:58.771622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.671 [2024-10-09 03:20:58.771660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.671 [2024-10-09 03:20:58.771691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.671 [2024-10-09 03:20:58.775659] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.671 [2024-10-09 03:20:58.775697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.671 [2024-10-09 03:20:58.775728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.671 [2024-10-09 03:20:58.779755] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.671 [2024-10-09 03:20:58.779795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.671 [2024-10-09 03:20:58.779826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.671 [2024-10-09 03:20:58.783994] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.671 [2024-10-09 03:20:58.784032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.671 [2024-10-09 03:20:58.784078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.671 [2024-10-09 03:20:58.788108] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 
00:18:15.671 [2024-10-09 03:20:58.788145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.671 [2024-10-09 03:20:58.788176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.671 [2024-10-09 03:20:58.792284] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.671 [2024-10-09 03:20:58.792322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.671 [2024-10-09 03:20:58.792352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.671 [2024-10-09 03:20:58.796390] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.671 [2024-10-09 03:20:58.796428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.671 [2024-10-09 03:20:58.796459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.671 [2024-10-09 03:20:58.800648] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.671 [2024-10-09 03:20:58.800686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.671 [2024-10-09 03:20:58.800716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.671 [2024-10-09 03:20:58.804850] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.671 [2024-10-09 03:20:58.804888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.671 [2024-10-09 03:20:58.804918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.671 [2024-10-09 03:20:58.809100] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.671 [2024-10-09 03:20:58.809136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.671 [2024-10-09 03:20:58.809166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.671 [2024-10-09 03:20:58.813262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.671 [2024-10-09 03:20:58.813299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.671 [2024-10-09 03:20:58.813329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.671 [2024-10-09 03:20:58.817344] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.671 [2024-10-09 03:20:58.817379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.671 [2024-10-09 03:20:58.817392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.671 7494.50 IOPS, 936.81 MiB/s [2024-10-09T03:20:58.974Z] [2024-10-09 03:20:58.822754] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8070e0) 00:18:15.671 [2024-10-09 03:20:58.822788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.671 [2024-10-09 03:20:58.822818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.671 00:18:15.671 Latency(us) 00:18:15.671 [2024-10-09T03:20:58.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.671 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:15.671 nvme0n1 : 2.00 7493.71 936.71 0.00 0.00 2131.81 1720.32 6136.55 00:18:15.671 [2024-10-09T03:20:58.974Z] =================================================================================================================== 00:18:15.671 [2024-10-09T03:20:58.974Z] Total : 7493.71 936.71 0.00 0.00 2131.81 1720.32 6136.55 00:18:15.671 { 00:18:15.671 "results": [ 00:18:15.671 { 00:18:15.671 "job": "nvme0n1", 00:18:15.671 "core_mask": "0x2", 00:18:15.671 "workload": "randread", 00:18:15.671 "status": "finished", 00:18:15.671 "queue_depth": 16, 00:18:15.671 "io_size": 131072, 00:18:15.671 "runtime": 2.002347, 00:18:15.671 "iops": 7493.70613584958, 00:18:15.671 "mibps": 936.7132669811975, 00:18:15.671 "io_failed": 0, 00:18:15.671 "io_timeout": 0, 00:18:15.671 "avg_latency_us": 2131.814203023235, 00:18:15.671 "min_latency_us": 1720.32, 00:18:15.671 "max_latency_us": 6136.552727272728 00:18:15.671 } 00:18:15.671 ], 00:18:15.671 "core_count": 1 00:18:15.671 } 00:18:15.672 03:20:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:15.672 03:20:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:15.672 | .driver_specific 00:18:15.672 | .nvme_error 00:18:15.672 | .status_code 00:18:15.672 | .command_transient_transport_error' 00:18:15.672 03:20:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:15.672 03:20:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:15.930 03:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 484 > 0 )) 00:18:15.930 03:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80417 00:18:15.930 03:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 80417 ']' 00:18:15.930 03:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 80417 00:18:15.930 03:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:18:15.930 03:20:59 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:15.930 03:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80417 00:18:15.930 killing process with pid 80417 00:18:15.930 Received shutdown signal, test time was about 2.000000 seconds 00:18:15.930 00:18:15.930 Latency(us) 00:18:15.930 [2024-10-09T03:20:59.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.930 [2024-10-09T03:20:59.233Z] =================================================================================================================== 00:18:15.930 [2024-10-09T03:20:59.233Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:15.930 03:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:15.930 03:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:15.930 03:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80417' 00:18:15.930 03:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 80417 00:18:15.930 03:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 80417 00:18:16.189 03:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:18:16.189 03:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:16.189 03:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:16.189 03:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:16.189 03:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:16.189 03:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80476 00:18:16.189 03:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80476 /var/tmp/bperf.sock 00:18:16.189 03:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:18:16.189 03:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 80476 ']' 00:18:16.189 03:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:16.189 03:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:16.189 03:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:16.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:16.189 03:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:16.189 03:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:16.189 [2024-10-09 03:20:59.440881] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
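The bdevperf instance from the randread pass (pid 80417) was only killed after the digest-error assertion succeeded: host/digest.sh reads back the per-bdev NVMe error counters enabled by --nvme-error-stat, filters out the transient transport error count with jq, and requires it to be non-zero, which is the (( 484 > 0 )) evaluation traced above. The reported 7493.71 IOPS is also consistent with the throughput figure, since 7493.71 IOPS at 128 KiB per IO works out to 936.71 MiB/s. A minimal sketch of that counter query, reusing the RPC socket path and bdev name from this run (not a verbatim copy of digest.sh):

    # Ask the bdevperf app for iostat on nvme0n1 and pull out the transient transport error counter.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error'   # evaluated to 484 in the pass above

The same check is repeated after the randwrite pass that is starting here.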
00:18:16.189 [2024-10-09 03:20:59.441147] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80476 ] 00:18:16.447 [2024-10-09 03:20:59.575008] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.447 [2024-10-09 03:20:59.680174] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.447 [2024-10-09 03:20:59.733881] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:17.382 03:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:17.382 03:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:18:17.382 03:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:17.382 03:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:17.382 03:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:17.382 03:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.382 03:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:17.382 03:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.383 03:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:17.383 03:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:17.949 nvme0n1 00:18:17.949 03:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:17.949 03:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.949 03:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:17.949 03:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.949 03:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:17.949 03:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:17.949 Running I/O for 2 seconds... 
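Before "Running I/O for 2 seconds..." appears, the trace above shows the full setup for the randwrite error pass: bdevperf was launched with -w randwrite -o 4096 -t 2 -q 128 -z, NVMe error statistics and unlimited bdev retries are switched on, any stale CRC32C error injection is cleared, the controller is attached over TCP with data digest enabled (--ddgst), CRC32C corruption is armed (-t corrupt -i 256), and perform_tests starts the workload inside the already-running bdevperf. A condensed sketch of that sequence, assuming bperf_rpc expands to rpc.py against /var/tmp/bperf.sock (as traced) and treating rpc_cmd as the autotest framework's generic RPC helper:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    # Count NVMe error completions per bdev and retry failed IOs indefinitely.
    $rpc -s $sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Clear any CRC32C error injection left over from the previous pass.
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    # Attach the target over TCP with data digest enabled on the queue pairs.
    $rpc -s $sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Arm CRC32C corruption so data digest failures show up during the run.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
    # Start the 2-second randwrite workload inside the running bdevperf.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests

The data digest errors that follow (tcp.c:2233:data_crc32_calc_done) are the expected result of that injection; each one completes the corresponding WRITE with COMMAND TRANSIENT TRANSPORT ERROR, which is what the transient-error check after the run counts.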
00:18:17.949 [2024-10-09 03:21:01.105799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:17.949 [2024-10-09 03:21:01.108513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.949 [2024-10-09 03:21:01.108708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.949 [2024-10-09 03:21:01.121238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198feb58 00:18:17.949 [2024-10-09 03:21:01.123794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.949 [2024-10-09 03:21:01.123995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:17.949 [2024-10-09 03:21:01.136303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fe2e8 00:18:17.949 [2024-10-09 03:21:01.138684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.949 [2024-10-09 03:21:01.138855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:17.949 [2024-10-09 03:21:01.150939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fda78 00:18:17.949 [2024-10-09 03:21:01.153324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.949 [2024-10-09 03:21:01.153509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:17.949 [2024-10-09 03:21:01.166168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fd208 00:18:17.949 [2024-10-09 03:21:01.168540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.949 [2024-10-09 03:21:01.168727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:17.949 [2024-10-09 03:21:01.180972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fc998 00:18:17.950 [2024-10-09 03:21:01.183467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.950 [2024-10-09 03:21:01.183657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:17.950 [2024-10-09 03:21:01.195829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fc128 00:18:17.950 [2024-10-09 03:21:01.198259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.950 [2024-10-09 03:21:01.198452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 
00:18:17.950 [2024-10-09 03:21:01.210791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fb8b8 00:18:17.950 [2024-10-09 03:21:01.213125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.950 [2024-10-09 03:21:01.213298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:17.950 [2024-10-09 03:21:01.225455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fb048 00:18:17.950 [2024-10-09 03:21:01.227861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.950 [2024-10-09 03:21:01.228038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:17.950 [2024-10-09 03:21:01.240474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fa7d8 00:18:17.950 [2024-10-09 03:21:01.242882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.950 [2024-10-09 03:21:01.243084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:18.209 [2024-10-09 03:21:01.255339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f9f68 00:18:18.209 [2024-10-09 03:21:01.257692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.209 [2024-10-09 03:21:01.257868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:18.209 [2024-10-09 03:21:01.270241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f96f8 00:18:18.209 [2024-10-09 03:21:01.272384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.209 [2024-10-09 03:21:01.272419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:18.209 [2024-10-09 03:21:01.284571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f8e88 00:18:18.209 [2024-10-09 03:21:01.286799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.209 [2024-10-09 03:21:01.286970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:18.209 [2024-10-09 03:21:01.299218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f8618 00:18:18.209 [2024-10-09 03:21:01.301290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.209 [2024-10-09 03:21:01.301457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 
m:0 dnr:0 00:18:18.209 [2024-10-09 03:21:01.314061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f7da8 00:18:18.209 [2024-10-09 03:21:01.316194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.209 [2024-10-09 03:21:01.316334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:18.209 [2024-10-09 03:21:01.328743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f7538 00:18:18.209 [2024-10-09 03:21:01.331047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.209 [2024-10-09 03:21:01.331266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:18.209 [2024-10-09 03:21:01.343630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f6cc8 00:18:18.209 [2024-10-09 03:21:01.345841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.209 [2024-10-09 03:21:01.346042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.209 [2024-10-09 03:21:01.358772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f6458 00:18:18.209 [2024-10-09 03:21:01.360974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.209 [2024-10-09 03:21:01.361186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:18.209 [2024-10-09 03:21:01.373743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f5be8 00:18:18.209 [2024-10-09 03:21:01.375952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.209 [2024-10-09 03:21:01.376173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:18.209 [2024-10-09 03:21:01.388626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f5378 00:18:18.209 [2024-10-09 03:21:01.390853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.209 [2024-10-09 03:21:01.391075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:18.209 [2024-10-09 03:21:01.403831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f4b08 00:18:18.209 [2024-10-09 03:21:01.405902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.209 [2024-10-09 03:21:01.406140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 
sqhd:0059 p:0 m:0 dnr:0 00:18:18.209 [2024-10-09 03:21:01.418999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f4298 00:18:18.209 [2024-10-09 03:21:01.421179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.209 [2024-10-09 03:21:01.421367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:18.209 [2024-10-09 03:21:01.434276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f3a28 00:18:18.209 [2024-10-09 03:21:01.436481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.209 [2024-10-09 03:21:01.436665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:18.209 [2024-10-09 03:21:01.449345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f31b8 00:18:18.209 [2024-10-09 03:21:01.451459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.209 [2024-10-09 03:21:01.451654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:18.209 [2024-10-09 03:21:01.464706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f2948 00:18:18.209 [2024-10-09 03:21:01.466754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.209 [2024-10-09 03:21:01.466923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:18.209 [2024-10-09 03:21:01.480351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f20d8 00:18:18.209 [2024-10-09 03:21:01.482499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.209 [2024-10-09 03:21:01.482535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:18.209 [2024-10-09 03:21:01.495609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f1868 00:18:18.209 [2024-10-09 03:21:01.497586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.209 [2024-10-09 03:21:01.497619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:18.468 [2024-10-09 03:21:01.510773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f0ff8 00:18:18.469 [2024-10-09 03:21:01.512918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.469 [2024-10-09 03:21:01.512953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:18.469 [2024-10-09 03:21:01.525918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f0788 00:18:18.469 [2024-10-09 03:21:01.528189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.469 [2024-10-09 03:21:01.528231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:18.469 [2024-10-09 03:21:01.541428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198eff18 00:18:18.469 [2024-10-09 03:21:01.543357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.469 [2024-10-09 03:21:01.543392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:18.469 [2024-10-09 03:21:01.556416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198ef6a8 00:18:18.469 [2024-10-09 03:21:01.558321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.469 [2024-10-09 03:21:01.558534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:18.469 [2024-10-09 03:21:01.571344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198eee38 00:18:18.469 [2024-10-09 03:21:01.573191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.469 [2024-10-09 03:21:01.573224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:18.469 [2024-10-09 03:21:01.585839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198ee5c8 00:18:18.469 [2024-10-09 03:21:01.587693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.469 [2024-10-09 03:21:01.587727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.469 [2024-10-09 03:21:01.600216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198edd58 00:18:18.469 [2024-10-09 03:21:01.601972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.469 [2024-10-09 03:21:01.602005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:18.469 [2024-10-09 03:21:01.614870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198ed4e8 00:18:18.469 [2024-10-09 03:21:01.616989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.469 [2024-10-09 03:21:01.617024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:18.469 [2024-10-09 03:21:01.631355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198ecc78 00:18:18.469 [2024-10-09 03:21:01.633332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.469 [2024-10-09 03:21:01.633370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:18.469 [2024-10-09 03:21:01.648028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198ec408 00:18:18.469 [2024-10-09 03:21:01.650196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.469 [2024-10-09 03:21:01.650234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:18.469 [2024-10-09 03:21:01.664865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198ebb98 00:18:18.469 [2024-10-09 03:21:01.666920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.469 [2024-10-09 03:21:01.667200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:18.469 [2024-10-09 03:21:01.681912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198eb328 00:18:18.469 [2024-10-09 03:21:01.683786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.469 [2024-10-09 03:21:01.683820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:18.469 [2024-10-09 03:21:01.698381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198eaab8 00:18:18.469 [2024-10-09 03:21:01.700471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.469 [2024-10-09 03:21:01.700505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:18.469 [2024-10-09 03:21:01.714662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198ea248 00:18:18.469 [2024-10-09 03:21:01.716424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.469 [2024-10-09 03:21:01.716458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:18.469 [2024-10-09 03:21:01.730007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e99d8 00:18:18.469 [2024-10-09 03:21:01.731942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.469 [2024-10-09 03:21:01.731978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:18.469 [2024-10-09 03:21:01.746149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e9168 00:18:18.469 [2024-10-09 03:21:01.747886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.469 [2024-10-09 03:21:01.747922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:18.469 [2024-10-09 03:21:01.762068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e88f8 00:18:18.469 [2024-10-09 03:21:01.763761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.469 [2024-10-09 03:21:01.763797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:18.728 [2024-10-09 03:21:01.777761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e8088 00:18:18.728 [2024-10-09 03:21:01.779510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.728 [2024-10-09 03:21:01.779547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:18.728 [2024-10-09 03:21:01.793273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e7818 00:18:18.728 [2024-10-09 03:21:01.794931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.728 [2024-10-09 03:21:01.795160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:18.728 [2024-10-09 03:21:01.808874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e6fa8 00:18:18.728 [2024-10-09 03:21:01.810557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.728 [2024-10-09 03:21:01.810750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:18.728 [2024-10-09 03:21:01.824456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e6738 00:18:18.728 [2024-10-09 03:21:01.826305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.728 [2024-10-09 03:21:01.826537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:18.728 [2024-10-09 03:21:01.840648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e5ec8 00:18:18.728 [2024-10-09 03:21:01.842441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.728 [2024-10-09 03:21:01.842655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.728 [2024-10-09 03:21:01.856562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e5658 00:18:18.728 [2024-10-09 03:21:01.858352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.728 [2024-10-09 03:21:01.858579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:18.728 [2024-10-09 03:21:01.872874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e4de8 00:18:18.728 [2024-10-09 03:21:01.874714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.728 [2024-10-09 03:21:01.874932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:18.728 [2024-10-09 03:21:01.889057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e4578 00:18:18.728 [2024-10-09 03:21:01.890836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.729 [2024-10-09 03:21:01.891075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:18.729 [2024-10-09 03:21:01.904546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e3d08 00:18:18.729 [2024-10-09 03:21:01.906240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.729 [2024-10-09 03:21:01.906432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:18.729 [2024-10-09 03:21:01.920057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e3498 00:18:18.729 [2024-10-09 03:21:01.921677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.729 [2024-10-09 03:21:01.921885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:18.729 [2024-10-09 03:21:01.936653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e2c28 00:18:18.729 [2024-10-09 03:21:01.938228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.729 [2024-10-09 03:21:01.938268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:18.729 [2024-10-09 03:21:01.953196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e23b8 00:18:18.729 [2024-10-09 03:21:01.954771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.729 [2024-10-09 03:21:01.954810] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:18.729 [2024-10-09 03:21:01.969747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e1b48 00:18:18.729 [2024-10-09 03:21:01.971374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.729 [2024-10-09 03:21:01.971409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:18.729 [2024-10-09 03:21:01.985017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e12d8 00:18:18.729 [2024-10-09 03:21:01.986534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.729 [2024-10-09 03:21:01.986725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:18.729 [2024-10-09 03:21:01.999782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e0a68 00:18:18.729 [2024-10-09 03:21:02.001344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.729 [2024-10-09 03:21:02.001538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:18.729 [2024-10-09 03:21:02.014745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e01f8 00:18:18.729 [2024-10-09 03:21:02.016312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.729 [2024-10-09 03:21:02.016516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:18.729 [2024-10-09 03:21:02.029619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198df988 00:18:18.988 [2024-10-09 03:21:02.031178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.988 [2024-10-09 03:21:02.031380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:18.988 [2024-10-09 03:21:02.045838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198df118 00:18:18.988 [2024-10-09 03:21:02.047435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.988 [2024-10-09 03:21:02.047677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:18.988 [2024-10-09 03:21:02.062763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198de8a8 00:18:18.988 [2024-10-09 03:21:02.064351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.988 [2024-10-09 
03:21:02.064563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:18.988 [2024-10-09 03:21:02.079202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198de038 00:18:18.988 [2024-10-09 03:21:02.080703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.988 [2024-10-09 03:21:02.080899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:18.988 16320.00 IOPS, 63.75 MiB/s [2024-10-09T03:21:02.291Z] [2024-10-09 03:21:02.102566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198de038 00:18:18.988 [2024-10-09 03:21:02.105140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.988 [2024-10-09 03:21:02.105348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.988 [2024-10-09 03:21:02.118544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198de8a8 00:18:18.988 [2024-10-09 03:21:02.121482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.988 [2024-10-09 03:21:02.121517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:18.988 [2024-10-09 03:21:02.134746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198df118 00:18:18.988 [2024-10-09 03:21:02.137155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.988 [2024-10-09 03:21:02.137189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:18.988 [2024-10-09 03:21:02.149600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198df988 00:18:18.988 [2024-10-09 03:21:02.151920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.988 [2024-10-09 03:21:02.151956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:18.988 [2024-10-09 03:21:02.164651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e01f8 00:18:18.988 [2024-10-09 03:21:02.167198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.988 [2024-10-09 03:21:02.167229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:18.988 [2024-10-09 03:21:02.179631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e0a68 00:18:18.988 [2024-10-09 03:21:02.181918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 
lba:14042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.988 [2024-10-09 03:21:02.181952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:18.988 [2024-10-09 03:21:02.194810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e12d8 00:18:18.988 [2024-10-09 03:21:02.197363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.988 [2024-10-09 03:21:02.197396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:18.988 [2024-10-09 03:21:02.210280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e1b48 00:18:18.988 [2024-10-09 03:21:02.212726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.988 [2024-10-09 03:21:02.212760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:18.988 [2024-10-09 03:21:02.226279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e23b8 00:18:18.988 [2024-10-09 03:21:02.228724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.988 [2024-10-09 03:21:02.228760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:18.988 [2024-10-09 03:21:02.242761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e2c28 00:18:18.988 [2024-10-09 03:21:02.245130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.988 [2024-10-09 03:21:02.245166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:18.988 [2024-10-09 03:21:02.259368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e3498 00:18:18.988 [2024-10-09 03:21:02.262004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.988 [2024-10-09 03:21:02.262039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:18.988 [2024-10-09 03:21:02.276263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e3d08 00:18:18.988 [2024-10-09 03:21:02.278722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.988 [2024-10-09 03:21:02.278760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:19.247 [2024-10-09 03:21:02.292613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e4578 00:18:19.247 [2024-10-09 03:21:02.295020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:76 nsid:1 lba:17313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.247 [2024-10-09 03:21:02.295076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:19.247 [2024-10-09 03:21:02.307551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e4de8 00:18:19.247 [2024-10-09 03:21:02.309942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.247 [2024-10-09 03:21:02.309970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:19.248 [2024-10-09 03:21:02.322325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e5658 00:18:19.248 [2024-10-09 03:21:02.324827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.248 [2024-10-09 03:21:02.324860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:19.248 [2024-10-09 03:21:02.337296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e5ec8 00:18:19.248 [2024-10-09 03:21:02.339732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.248 [2024-10-09 03:21:02.339765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:19.248 [2024-10-09 03:21:02.352492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e6738 00:18:19.248 [2024-10-09 03:21:02.354712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.248 [2024-10-09 03:21:02.354915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:19.248 [2024-10-09 03:21:02.368258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e6fa8 00:18:19.248 [2024-10-09 03:21:02.370435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.248 [2024-10-09 03:21:02.370473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:19.248 [2024-10-09 03:21:02.383366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e7818 00:18:19.248 [2024-10-09 03:21:02.385422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.248 [2024-10-09 03:21:02.385454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:19.248 [2024-10-09 03:21:02.397833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e8088 00:18:19.248 [2024-10-09 03:21:02.399909] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.248 [2024-10-09 03:21:02.399943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:19.248 [2024-10-09 03:21:02.412326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e88f8 00:18:19.248 [2024-10-09 03:21:02.414355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.248 [2024-10-09 03:21:02.414576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:19.248 [2024-10-09 03:21:02.426997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e9168 00:18:19.248 [2024-10-09 03:21:02.429308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.248 [2024-10-09 03:21:02.429341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:19.248 [2024-10-09 03:21:02.441860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198e99d8 00:18:19.248 [2024-10-09 03:21:02.443898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.248 [2024-10-09 03:21:02.443930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:19.248 [2024-10-09 03:21:02.456455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198ea248 00:18:19.248 [2024-10-09 03:21:02.458431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.248 [2024-10-09 03:21:02.458619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:19.248 [2024-10-09 03:21:02.471218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198eaab8 00:18:19.248 [2024-10-09 03:21:02.473159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.248 [2024-10-09 03:21:02.473326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:19.248 [2024-10-09 03:21:02.485849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198eb328 00:18:19.248 [2024-10-09 03:21:02.487858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.248 [2024-10-09 03:21:02.487893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:19.248 [2024-10-09 03:21:02.500653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198ebb98 00:18:19.248 [2024-10-09 03:21:02.502700] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.248 [2024-10-09 03:21:02.502890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:19.248 [2024-10-09 03:21:02.515447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198ec408 00:18:19.248 [2024-10-09 03:21:02.517316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.248 [2024-10-09 03:21:02.517349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:19.248 [2024-10-09 03:21:02.529823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198ecc78 00:18:19.248 [2024-10-09 03:21:02.531763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.248 [2024-10-09 03:21:02.531933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:19.248 [2024-10-09 03:21:02.544620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198ed4e8 00:18:19.248 [2024-10-09 03:21:02.546712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.248 [2024-10-09 03:21:02.546921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:19.506 [2024-10-09 03:21:02.559832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198edd58 00:18:19.506 [2024-10-09 03:21:02.561734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.506 [2024-10-09 03:21:02.561934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:19.506 [2024-10-09 03:21:02.574746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198ee5c8 00:18:19.506 [2024-10-09 03:21:02.576738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.506 [2024-10-09 03:21:02.576941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:19.507 [2024-10-09 03:21:02.589712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198eee38 00:18:19.507 [2024-10-09 03:21:02.591658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.507 [2024-10-09 03:21:02.591849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:19.507 [2024-10-09 03:21:02.604600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198ef6a8 00:18:19.507 [2024-10-09 03:21:02.606568] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.507 [2024-10-09 03:21:02.606778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:19.507 [2024-10-09 03:21:02.619659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198eff18 00:18:19.507 [2024-10-09 03:21:02.621537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.507 [2024-10-09 03:21:02.621741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:19.507 [2024-10-09 03:21:02.634705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f0788 00:18:19.507 [2024-10-09 03:21:02.636697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.507 [2024-10-09 03:21:02.636871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:19.507 [2024-10-09 03:21:02.651289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f0ff8 00:18:19.507 [2024-10-09 03:21:02.653154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.507 [2024-10-09 03:21:02.653190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:19.507 [2024-10-09 03:21:02.667872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f1868 00:18:19.507 [2024-10-09 03:21:02.669803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.507 [2024-10-09 03:21:02.669832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:19.507 [2024-10-09 03:21:02.683684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f20d8 00:18:19.507 [2024-10-09 03:21:02.685389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.507 [2024-10-09 03:21:02.685422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:19.507 [2024-10-09 03:21:02.698165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f2948 00:18:19.507 [2024-10-09 03:21:02.699919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.507 [2024-10-09 03:21:02.699953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:19.507 [2024-10-09 03:21:02.712861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f31b8 00:18:19.507 [2024-10-09 
03:21:02.714633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.507 [2024-10-09 03:21:02.714820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:19.507 [2024-10-09 03:21:02.727441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f3a28 00:18:19.507 [2024-10-09 03:21:02.729226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.507 [2024-10-09 03:21:02.729426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:19.507 [2024-10-09 03:21:02.742358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f4298 00:18:19.507 [2024-10-09 03:21:02.744165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.507 [2024-10-09 03:21:02.744368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:19.507 [2024-10-09 03:21:02.757225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f4b08 00:18:19.507 [2024-10-09 03:21:02.758998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.507 [2024-10-09 03:21:02.759243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:19.507 [2024-10-09 03:21:02.772137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f5378 00:18:19.507 [2024-10-09 03:21:02.773857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.507 [2024-10-09 03:21:02.774110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:19.507 [2024-10-09 03:21:02.786959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f5be8 00:18:19.507 [2024-10-09 03:21:02.788679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.507 [2024-10-09 03:21:02.788880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:19.507 [2024-10-09 03:21:02.801689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f6458 00:18:19.507 [2024-10-09 03:21:02.803398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.507 [2024-10-09 03:21:02.803604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:19.766 [2024-10-09 03:21:02.816432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f6cc8 00:18:19.766 
[2024-10-09 03:21:02.818172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.766 [2024-10-09 03:21:02.818356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:19.766 [2024-10-09 03:21:02.831610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f7538 00:18:19.766 [2024-10-09 03:21:02.833113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.766 [2024-10-09 03:21:02.833146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:19.766 [2024-10-09 03:21:02.846644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f7da8 00:18:19.766 [2024-10-09 03:21:02.848467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.766 [2024-10-09 03:21:02.848500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:19.766 [2024-10-09 03:21:02.861742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f8618 00:18:19.766 [2024-10-09 03:21:02.863353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.766 [2024-10-09 03:21:02.863383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:19.766 [2024-10-09 03:21:02.876822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f8e88 00:18:19.766 [2024-10-09 03:21:02.878525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.766 [2024-10-09 03:21:02.878715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:19.766 [2024-10-09 03:21:02.893760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f96f8 00:18:19.766 [2024-10-09 03:21:02.895442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.766 [2024-10-09 03:21:02.895494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:19.766 [2024-10-09 03:21:02.910781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198f9f68 00:18:19.766 [2024-10-09 03:21:02.912271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.766 [2024-10-09 03:21:02.912306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:19.766 [2024-10-09 03:21:02.926343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fa7d8 
00:18:19.766 [2024-10-09 03:21:02.927885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.766 [2024-10-09 03:21:02.927918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:19.766 [2024-10-09 03:21:02.941532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fb048 00:18:19.766 [2024-10-09 03:21:02.943147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.766 [2024-10-09 03:21:02.943177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:19.766 [2024-10-09 03:21:02.957764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fb8b8 00:18:19.766 [2024-10-09 03:21:02.959289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.766 [2024-10-09 03:21:02.959335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:19.766 [2024-10-09 03:21:02.974644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fc128 00:18:19.766 [2024-10-09 03:21:02.976093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.766 [2024-10-09 03:21:02.976153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:19.766 [2024-10-09 03:21:02.991031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fc998 00:18:19.766 [2024-10-09 03:21:02.992477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.766 [2024-10-09 03:21:02.992511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:19.766 [2024-10-09 03:21:03.006705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fd208 00:18:19.766 [2024-10-09 03:21:03.008328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.766 [2024-10-09 03:21:03.008358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:19.766 [2024-10-09 03:21:03.022262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fda78 00:18:19.766 [2024-10-09 03:21:03.023661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.766 [2024-10-09 03:21:03.023695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:19.766 [2024-10-09 03:21:03.037516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) 
with pdu=0x2000198fe2e8 00:18:19.766 [2024-10-09 03:21:03.038885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.766 [2024-10-09 03:21:03.039101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:19.766 [2024-10-09 03:21:03.052977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198feb58 00:18:19.766 [2024-10-09 03:21:03.054391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.766 [2024-10-09 03:21:03.054542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:20.024 [2024-10-09 03:21:03.074374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:20.024 [2024-10-09 03:21:03.077067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.024 [2024-10-09 03:21:03.077101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.024 16383.00 IOPS, 64.00 MiB/s 00:18:20.024 Latency(us) 00:18:20.024 [2024-10-09T03:21:03.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.024 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:20.024 nvme0n1 : 2.00 16413.17 64.11 0.00 0.00 7792.12 6851.49 28955.00 00:18:20.024 [2024-10-09T03:21:03.327Z] =================================================================================================================== 00:18:20.024 [2024-10-09T03:21:03.327Z] Total : 16413.17 64.11 0.00 0.00 7792.12 6851.49 28955.00 00:18:20.024 { 00:18:20.024 "results": [ 00:18:20.024 { 00:18:20.024 "job": "nvme0n1", 00:18:20.024 "core_mask": "0x2", 00:18:20.024 "workload": "randwrite", 00:18:20.024 "status": "finished", 00:18:20.024 "queue_depth": 128, 00:18:20.024 "io_size": 4096, 00:18:20.024 "runtime": 2.004122, 00:18:20.024 "iops": 16413.1724515773, 00:18:20.024 "mibps": 64.11395488897382, 00:18:20.024 "io_failed": 0, 00:18:20.024 "io_timeout": 0, 00:18:20.024 "avg_latency_us": 7792.117633610993, 00:18:20.024 "min_latency_us": 6851.490909090909, 00:18:20.024 "max_latency_us": 28954.996363636365 00:18:20.024 } 00:18:20.024 ], 00:18:20.024 "core_count": 1 00:18:20.024 } 00:18:20.024 03:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:20.024 03:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:20.024 03:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:20.024 03:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:20.024 | .driver_specific 00:18:20.024 | .nvme_error 00:18:20.024 | .status_code 00:18:20.024 | .command_transient_transport_error' 00:18:20.283 03:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 128 > 0 )) 00:18:20.283 03:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@73 -- # killprocess 80476 00:18:20.283 03:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 80476 ']' 00:18:20.283 03:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 80476 00:18:20.283 03:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:18:20.283 03:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:20.283 03:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80476 00:18:20.283 killing process with pid 80476 00:18:20.283 Received shutdown signal, test time was about 2.000000 seconds 00:18:20.283 00:18:20.283 Latency(us) 00:18:20.283 [2024-10-09T03:21:03.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.283 [2024-10-09T03:21:03.586Z] =================================================================================================================== 00:18:20.283 [2024-10-09T03:21:03.586Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:20.283 03:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:20.283 03:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:20.283 03:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80476' 00:18:20.283 03:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 80476 00:18:20.283 03:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 80476 00:18:20.541 03:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:18:20.541 03:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:20.541 03:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:20.541 03:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:20.541 03:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:20.541 03:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80532 00:18:20.541 03:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:18:20.541 03:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80532 /var/tmp/bperf.sock 00:18:20.541 03:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 80532 ']' 00:18:20.541 03:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:20.541 03:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:20.541 03:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:18:20.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:20.541 03:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:20.541 03:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:20.541 [2024-10-09 03:21:03.689970] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:18:20.541 [2024-10-09 03:21:03.690244] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80532 ] 00:18:20.541 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:20.541 Zero copy mechanism will not be used. 00:18:20.541 [2024-10-09 03:21:03.823826] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.799 [2024-10-09 03:21:03.910380] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.799 [2024-10-09 03:21:03.964379] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:21.367 03:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:21.367 03:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:18:21.367 03:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:21.367 03:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:21.625 03:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:21.625 03:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.625 03:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:21.625 03:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.625 03:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:21.625 03:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:22.193 nvme0n1 00:18:22.193 03:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:22.193 03:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.193 03:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:22.193 03:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.193 03:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:22.193 
03:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:22.193 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:22.193 Zero copy mechanism will not be used. 00:18:22.193 Running I/O for 2 seconds... 00:18:22.193 [2024-10-09 03:21:05.361848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.193 [2024-10-09 03:21:05.362248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.193 [2024-10-09 03:21:05.362279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.193 [2024-10-09 03:21:05.366689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.193 [2024-10-09 03:21:05.366964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.193 [2024-10-09 03:21:05.366989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.193 [2024-10-09 03:21:05.371979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.193 [2024-10-09 03:21:05.372064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.193 [2024-10-09 03:21:05.372103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.193 [2024-10-09 03:21:05.377009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.193 [2024-10-09 03:21:05.377100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.193 [2024-10-09 03:21:05.377123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.193 [2024-10-09 03:21:05.381969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.193 [2024-10-09 03:21:05.382067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.193 [2024-10-09 03:21:05.382128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.193 [2024-10-09 03:21:05.386839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.193 [2024-10-09 03:21:05.387094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.193 [2024-10-09 03:21:05.387133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.193 [2024-10-09 03:21:05.391848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) 
with pdu=0x2000198fef90 00:18:22.193 [2024-10-09 03:21:05.391929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.193 [2024-10-09 03:21:05.391952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.193 [2024-10-09 03:21:05.396934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.193 [2024-10-09 03:21:05.397018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.193 [2024-10-09 03:21:05.397041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.193 [2024-10-09 03:21:05.401886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.193 [2024-10-09 03:21:05.401973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.193 [2024-10-09 03:21:05.401996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.193 [2024-10-09 03:21:05.406940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.193 [2024-10-09 03:21:05.407207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.193 [2024-10-09 03:21:05.407232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.193 [2024-10-09 03:21:05.412218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.193 [2024-10-09 03:21:05.412299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.193 [2024-10-09 03:21:05.412321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.193 [2024-10-09 03:21:05.417142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.193 [2024-10-09 03:21:05.417224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.193 [2024-10-09 03:21:05.417246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.193 [2024-10-09 03:21:05.421799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.193 [2024-10-09 03:21:05.422029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.193 [2024-10-09 03:21:05.422065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.193 [2024-10-09 03:21:05.427006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.193 [2024-10-09 03:21:05.427089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.193 [2024-10-09 03:21:05.427129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.193 [2024-10-09 03:21:05.431849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.193 [2024-10-09 03:21:05.431935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.193 [2024-10-09 03:21:05.431957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.193 [2024-10-09 03:21:05.436823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.193 [2024-10-09 03:21:05.436909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.193 [2024-10-09 03:21:05.436931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.193 [2024-10-09 03:21:05.441747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.193 [2024-10-09 03:21:05.441992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.193 [2024-10-09 03:21:05.442016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.193 [2024-10-09 03:21:05.446822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.193 [2024-10-09 03:21:05.446906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.193 [2024-10-09 03:21:05.446928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.193 [2024-10-09 03:21:05.451625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.193 [2024-10-09 03:21:05.451727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.193 [2024-10-09 03:21:05.451750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.193 [2024-10-09 03:21:05.456691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.193 [2024-10-09 03:21:05.456794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.193 [2024-10-09 03:21:05.456818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.193 [2024-10-09 03:21:05.461542] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.193 [2024-10-09 03:21:05.461774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.193 [2024-10-09 03:21:05.461804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.193 [2024-10-09 03:21:05.466550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.193 [2024-10-09 03:21:05.466633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.193 [2024-10-09 03:21:05.466655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.193 [2024-10-09 03:21:05.471411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.194 [2024-10-09 03:21:05.471493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.194 [2024-10-09 03:21:05.471515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.194 [2024-10-09 03:21:05.476448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.194 [2024-10-09 03:21:05.476529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.194 [2024-10-09 03:21:05.476551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.194 [2024-10-09 03:21:05.481246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.194 [2024-10-09 03:21:05.481327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.194 [2024-10-09 03:21:05.481349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.194 [2024-10-09 03:21:05.485834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.194 [2024-10-09 03:21:05.485917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.194 [2024-10-09 03:21:05.485950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.194 [2024-10-09 03:21:05.490581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.194 [2024-10-09 03:21:05.490667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.194 [2024-10-09 03:21:05.490689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:18:22.454 [2024-10-09 03:21:05.495168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.454 [2024-10-09 03:21:05.495249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.454 [2024-10-09 03:21:05.495271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.454 [2024-10-09 03:21:05.500017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.454 [2024-10-09 03:21:05.500127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.454 [2024-10-09 03:21:05.500150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.454 [2024-10-09 03:21:05.504710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.454 [2024-10-09 03:21:05.504943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.454 [2024-10-09 03:21:05.504987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.454 [2024-10-09 03:21:05.509519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.454 [2024-10-09 03:21:05.509599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.454 [2024-10-09 03:21:05.509620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.454 [2024-10-09 03:21:05.514262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.454 [2024-10-09 03:21:05.514349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.454 [2024-10-09 03:21:05.514373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.454 [2024-10-09 03:21:05.518924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.454 [2024-10-09 03:21:05.519059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.454 [2024-10-09 03:21:05.519083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.454 [2024-10-09 03:21:05.523790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.454 [2024-10-09 03:21:05.523875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.454 [2024-10-09 03:21:05.523897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.454 [2024-10-09 03:21:05.528633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.454 [2024-10-09 03:21:05.528869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.454 [2024-10-09 03:21:05.528905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.454 [2024-10-09 03:21:05.533508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.454 [2024-10-09 03:21:05.533589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.454 [2024-10-09 03:21:05.533611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.454 [2024-10-09 03:21:05.538158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.454 [2024-10-09 03:21:05.538229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.454 [2024-10-09 03:21:05.538253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.454 [2024-10-09 03:21:05.542815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.454 [2024-10-09 03:21:05.542899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.454 [2024-10-09 03:21:05.542920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.454 [2024-10-09 03:21:05.547556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.454 [2024-10-09 03:21:05.547637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.454 [2024-10-09 03:21:05.547659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.454 [2024-10-09 03:21:05.552177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.454 [2024-10-09 03:21:05.552275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.454 [2024-10-09 03:21:05.552298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.454 [2024-10-09 03:21:05.557212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.454 [2024-10-09 03:21:05.557293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.454 [2024-10-09 03:21:05.557314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.454 [2024-10-09 03:21:05.561828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.454 [2024-10-09 03:21:05.561909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.454 [2024-10-09 03:21:05.561929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.454 [2024-10-09 03:21:05.566571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.454 [2024-10-09 03:21:05.566657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.454 [2024-10-09 03:21:05.566689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.454 [2024-10-09 03:21:05.571234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.454 [2024-10-09 03:21:05.571323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.454 [2024-10-09 03:21:05.571344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.454 [2024-10-09 03:21:05.576080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.454 [2024-10-09 03:21:05.576222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.454 [2024-10-09 03:21:05.576245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.454 [2024-10-09 03:21:05.580735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.454 [2024-10-09 03:21:05.580964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.454 [2024-10-09 03:21:05.581009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.454 [2024-10-09 03:21:05.585560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.454 [2024-10-09 03:21:05.585643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.454 [2024-10-09 03:21:05.585664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.454 [2024-10-09 03:21:05.590120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.454 [2024-10-09 03:21:05.590204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.454 [2024-10-09 03:21:05.590226] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.454 [2024-10-09 03:21:05.594800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.454 [2024-10-09 03:21:05.594885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.454 [2024-10-09 03:21:05.594907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.454 [2024-10-09 03:21:05.599409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.454 [2024-10-09 03:21:05.599492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.454 [2024-10-09 03:21:05.599513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.454 [2024-10-09 03:21:05.603899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.454 [2024-10-09 03:21:05.603982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.454 [2024-10-09 03:21:05.604003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.454 [2024-10-09 03:21:05.608599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.454 [2024-10-09 03:21:05.608687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.454 [2024-10-09 03:21:05.608710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.454 [2024-10-09 03:21:05.613166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.454 [2024-10-09 03:21:05.613248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.454 [2024-10-09 03:21:05.613269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.454 [2024-10-09 03:21:05.617727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.454 [2024-10-09 03:21:05.617807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.454 [2024-10-09 03:21:05.617829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.454 [2024-10-09 03:21:05.622277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.455 [2024-10-09 03:21:05.622364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.455 
[2024-10-09 03:21:05.622387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.455 [2024-10-09 03:21:05.626826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.455 [2024-10-09 03:21:05.626907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.455 [2024-10-09 03:21:05.626929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.455 [2024-10-09 03:21:05.631476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.455 [2024-10-09 03:21:05.631557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.455 [2024-10-09 03:21:05.631579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.455 [2024-10-09 03:21:05.636111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.455 [2024-10-09 03:21:05.636192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.455 [2024-10-09 03:21:05.636213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.455 [2024-10-09 03:21:05.640646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.455 [2024-10-09 03:21:05.640876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.455 [2024-10-09 03:21:05.640907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.455 [2024-10-09 03:21:05.645416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.455 [2024-10-09 03:21:05.645498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.455 [2024-10-09 03:21:05.645519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.455 [2024-10-09 03:21:05.649926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.455 [2024-10-09 03:21:05.650009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.455 [2024-10-09 03:21:05.650030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.455 [2024-10-09 03:21:05.654603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.455 [2024-10-09 03:21:05.654684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:22.455 [2024-10-09 03:21:05.654705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.455 [2024-10-09 03:21:05.659226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.455 [2024-10-09 03:21:05.659309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.455 [2024-10-09 03:21:05.659330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.455 [2024-10-09 03:21:05.663784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.455 [2024-10-09 03:21:05.663865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.455 [2024-10-09 03:21:05.663886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.455 [2024-10-09 03:21:05.668491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.455 [2024-10-09 03:21:05.668590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.455 [2024-10-09 03:21:05.668613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.455 [2024-10-09 03:21:05.673194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.455 [2024-10-09 03:21:05.673275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.455 [2024-10-09 03:21:05.673297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.455 [2024-10-09 03:21:05.677625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.455 [2024-10-09 03:21:05.677706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.455 [2024-10-09 03:21:05.677727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.455 [2024-10-09 03:21:05.682198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.455 [2024-10-09 03:21:05.682284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.455 [2024-10-09 03:21:05.682306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.455 [2024-10-09 03:21:05.686770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.455 [2024-10-09 03:21:05.686852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.455 [2024-10-09 03:21:05.686874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.455 [2024-10-09 03:21:05.691355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.455 [2024-10-09 03:21:05.691438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.455 [2024-10-09 03:21:05.691460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.455 [2024-10-09 03:21:05.695917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.455 [2024-10-09 03:21:05.695999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.455 [2024-10-09 03:21:05.696020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.455 [2024-10-09 03:21:05.700562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.455 [2024-10-09 03:21:05.700661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.455 [2024-10-09 03:21:05.700683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.455 [2024-10-09 03:21:05.705176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.455 [2024-10-09 03:21:05.705234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.455 [2024-10-09 03:21:05.705255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.455 [2024-10-09 03:21:05.710255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.455 [2024-10-09 03:21:05.710324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.455 [2024-10-09 03:21:05.710348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.455 [2024-10-09 03:21:05.715318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.455 [2024-10-09 03:21:05.715405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.455 [2024-10-09 03:21:05.715444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.455 [2024-10-09 03:21:05.720429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.455 [2024-10-09 03:21:05.720548] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.455 [2024-10-09 03:21:05.720572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.455 [2024-10-09 03:21:05.725803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.455 [2024-10-09 03:21:05.725884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.455 [2024-10-09 03:21:05.725907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.455 [2024-10-09 03:21:05.730849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.455 [2024-10-09 03:21:05.730932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.455 [2024-10-09 03:21:05.730953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.455 [2024-10-09 03:21:05.735893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.455 [2024-10-09 03:21:05.736157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.455 [2024-10-09 03:21:05.736182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.455 [2024-10-09 03:21:05.741134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.455 [2024-10-09 03:21:05.741231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.455 [2024-10-09 03:21:05.741254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.455 [2024-10-09 03:21:05.746110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.455 [2024-10-09 03:21:05.746176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.455 [2024-10-09 03:21:05.746200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.455 [2024-10-09 03:21:05.750708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.455 [2024-10-09 03:21:05.750789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.455 [2024-10-09 03:21:05.750811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.780 [2024-10-09 03:21:05.755334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.780 
[2024-10-09 03:21:05.755415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.780 [2024-10-09 03:21:05.755437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.780 [2024-10-09 03:21:05.759934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.780 [2024-10-09 03:21:05.760014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.780 [2024-10-09 03:21:05.760036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.780 [2024-10-09 03:21:05.764624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.780 [2024-10-09 03:21:05.764708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.780 [2024-10-09 03:21:05.764729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.780 [2024-10-09 03:21:05.769328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.780 [2024-10-09 03:21:05.769392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.780 [2024-10-09 03:21:05.769413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.780 [2024-10-09 03:21:05.773839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.780 [2024-10-09 03:21:05.773922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.780 [2024-10-09 03:21:05.773944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.780 [2024-10-09 03:21:05.778477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.780 [2024-10-09 03:21:05.778559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.780 [2024-10-09 03:21:05.778580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.780 [2024-10-09 03:21:05.783039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.780 [2024-10-09 03:21:05.783285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.780 [2024-10-09 03:21:05.783313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.780 [2024-10-09 03:21:05.787770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) 
with pdu=0x2000198fef90 00:18:22.780 [2024-10-09 03:21:05.787853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.780 [2024-10-09 03:21:05.787874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.780 [2024-10-09 03:21:05.792372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.780 [2024-10-09 03:21:05.792471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.780 [2024-10-09 03:21:05.792493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.780 [2024-10-09 03:21:05.796925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.780 [2024-10-09 03:21:05.797022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.780 [2024-10-09 03:21:05.797043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.780 [2024-10-09 03:21:05.801619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.780 [2024-10-09 03:21:05.801700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.780 [2024-10-09 03:21:05.801722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.780 [2024-10-09 03:21:05.806229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.780 [2024-10-09 03:21:05.806313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.780 [2024-10-09 03:21:05.806336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.780 [2024-10-09 03:21:05.810819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.811064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.811087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.815504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.815586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.815608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.820007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.820125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.820147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.824554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.824651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.824674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.829141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.829224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.829245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.833699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.833782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.833803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.838359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.838441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.838462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.842891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.842972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.842994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.847538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.847617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.847639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.852112] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.852195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.852217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.856694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.856777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.856799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.861275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.861357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.861379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.865798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.866048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.866071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.870729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.870812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.870833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.875364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.875450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.875471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.879801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.879883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.879904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:18:22.781 [2024-10-09 03:21:05.884351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.884435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.884456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.888882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.888963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.888985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.893464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.893546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.893567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.897989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.898249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.898286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.902870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.903121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.903440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.907843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.908102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.908298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.912630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.912861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.913096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.917505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.917786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.917983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.922870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.923107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.923347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.927958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.928242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.928549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.933085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.933347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.933566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.937873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.937958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.937980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.942509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.942602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.942635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.947181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.947274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.947295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.951693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.951775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.951796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.956297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.956380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.956401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.960856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.960938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.960959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.965427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.965507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.965528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.969938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.970020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.970041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.974608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.974693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.974714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.979337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.979417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.979438] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.983866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.983947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.983968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.988445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.988525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.988546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.992920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.993001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.993023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:05.997514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:05.997595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:05.997616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:06.002107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:06.002192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:06.002215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:06.006676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:06.006914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:06.006939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:06.011399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:06.011481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 
[2024-10-09 03:21:06.011502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:06.015929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:06.016011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:06.016032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:06.020534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:06.020614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:06.020635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:06.025296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:06.025394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:06.025416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:06.030241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:06.030328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:06.030352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:06.035157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:06.035254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:06.035277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:06.040153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:06.040217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.781 [2024-10-09 03:21:06.040239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.781 [2024-10-09 03:21:06.045292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.781 [2024-10-09 03:21:06.045394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:22.782 [2024-10-09 03:21:06.045417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.782 [2024-10-09 03:21:06.050327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.782 [2024-10-09 03:21:06.050446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.782 [2024-10-09 03:21:06.050467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.782 [2024-10-09 03:21:06.055286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.782 [2024-10-09 03:21:06.055371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.782 [2024-10-09 03:21:06.055401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.782 [2024-10-09 03:21:06.060091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.782 [2024-10-09 03:21:06.060184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.782 [2024-10-09 03:21:06.060205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.782 [2024-10-09 03:21:06.064838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.782 [2024-10-09 03:21:06.065089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.782 [2024-10-09 03:21:06.065112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.782 [2024-10-09 03:21:06.069811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.782 [2024-10-09 03:21:06.069895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.782 [2024-10-09 03:21:06.069916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.782 [2024-10-09 03:21:06.075286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.782 [2024-10-09 03:21:06.075370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.782 [2024-10-09 03:21:06.075393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.782 [2024-10-09 03:21:06.080346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:22.782 [2024-10-09 03:21:06.080429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.782 [2024-10-09 03:21:06.080451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.041 [2024-10-09 03:21:06.085014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.042 [2024-10-09 03:21:06.085125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.042 [2024-10-09 03:21:06.085147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.042 [2024-10-09 03:21:06.089673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.042 [2024-10-09 03:21:06.089754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.042 [2024-10-09 03:21:06.089775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.042 [2024-10-09 03:21:06.094401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.042 [2024-10-09 03:21:06.094514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.042 [2024-10-09 03:21:06.094535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.042 [2024-10-09 03:21:06.098933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.042 [2024-10-09 03:21:06.099014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.042 [2024-10-09 03:21:06.099036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.042 [2024-10-09 03:21:06.103552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.042 [2024-10-09 03:21:06.103652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.042 [2024-10-09 03:21:06.103676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.042 [2024-10-09 03:21:06.108179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.042 [2024-10-09 03:21:06.108260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.042 [2024-10-09 03:21:06.108281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.042 [2024-10-09 03:21:06.112843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.042 [2024-10-09 03:21:06.112924] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.042 [2024-10-09 03:21:06.112946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.042 [2024-10-09 03:21:06.117431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.042 [2024-10-09 03:21:06.117510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.042 [2024-10-09 03:21:06.117531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.042 [2024-10-09 03:21:06.121921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.042 [2024-10-09 03:21:06.122000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.042 [2024-10-09 03:21:06.122022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.042 [2024-10-09 03:21:06.126670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.042 [2024-10-09 03:21:06.126919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.042 [2024-10-09 03:21:06.126941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.042 [2024-10-09 03:21:06.131540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.042 [2024-10-09 03:21:06.131639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.042 [2024-10-09 03:21:06.131661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.042 [2024-10-09 03:21:06.136299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.042 [2024-10-09 03:21:06.136382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.042 [2024-10-09 03:21:06.136404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.042 [2024-10-09 03:21:06.140802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.042 [2024-10-09 03:21:06.140882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.042 [2024-10-09 03:21:06.140903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.042 [2024-10-09 03:21:06.145376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.042 [2024-10-09 03:21:06.145457] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.042 [2024-10-09 03:21:06.145478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.042 [2024-10-09 03:21:06.149898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.042 [2024-10-09 03:21:06.149978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.042 [2024-10-09 03:21:06.149999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.042 [2024-10-09 03:21:06.154631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.042 [2024-10-09 03:21:06.154862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.042 [2024-10-09 03:21:06.154891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.042 [2024-10-09 03:21:06.159323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.042 [2024-10-09 03:21:06.159389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.042 [2024-10-09 03:21:06.159411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.042 [2024-10-09 03:21:06.163873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.042 [2024-10-09 03:21:06.163964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.042 [2024-10-09 03:21:06.164000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.042 [2024-10-09 03:21:06.168560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.042 [2024-10-09 03:21:06.168640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.042 [2024-10-09 03:21:06.168661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.042 [2024-10-09 03:21:06.173116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.042 [2024-10-09 03:21:06.173197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.042 [2024-10-09 03:21:06.173218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.042 [2024-10-09 03:21:06.177634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.042 
[2024-10-09 03:21:06.177730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.042 [2024-10-09 03:21:06.177751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.042 [2024-10-09 03:21:06.182210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.042 [2024-10-09 03:21:06.182297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.042 [2024-10-09 03:21:06.182320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.042 [2024-10-09 03:21:06.186759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.042 [2024-10-09 03:21:06.186990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.042 [2024-10-09 03:21:06.187019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.042 [2024-10-09 03:21:06.191561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.042 [2024-10-09 03:21:06.191669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.042 [2024-10-09 03:21:06.191691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.042 [2024-10-09 03:21:06.196216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.042 [2024-10-09 03:21:06.196295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.042 [2024-10-09 03:21:06.196317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.042 [2024-10-09 03:21:06.200733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.042 [2024-10-09 03:21:06.200818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.042 [2024-10-09 03:21:06.200839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.042 [2024-10-09 03:21:06.205267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.042 [2024-10-09 03:21:06.205349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.042 [2024-10-09 03:21:06.205371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.042 [2024-10-09 03:21:06.209732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) 
with pdu=0x2000198fef90 00:18:23.043 [2024-10-09 03:21:06.209814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.043 [2024-10-09 03:21:06.209835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.043 [2024-10-09 03:21:06.214370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.043 [2024-10-09 03:21:06.214497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.043 [2024-10-09 03:21:06.214518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.043 [2024-10-09 03:21:06.218962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.043 [2024-10-09 03:21:06.219042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.043 [2024-10-09 03:21:06.219063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.043 [2024-10-09 03:21:06.223614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.043 [2024-10-09 03:21:06.223694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.043 [2024-10-09 03:21:06.223716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.043 [2024-10-09 03:21:06.228110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.043 [2024-10-09 03:21:06.228198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.043 [2024-10-09 03:21:06.228219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.043 [2024-10-09 03:21:06.232756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.043 [2024-10-09 03:21:06.232837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.043 [2024-10-09 03:21:06.232860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.043 [2024-10-09 03:21:06.237367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.043 [2024-10-09 03:21:06.237448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.043 [2024-10-09 03:21:06.237470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.043 [2024-10-09 03:21:06.241811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.043 [2024-10-09 03:21:06.241891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.043 [2024-10-09 03:21:06.241912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.043 [2024-10-09 03:21:06.246491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.043 [2024-10-09 03:21:06.246571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.043 [2024-10-09 03:21:06.246593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.043 [2024-10-09 03:21:06.251029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.043 [2024-10-09 03:21:06.251158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.043 [2024-10-09 03:21:06.251180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.043 [2024-10-09 03:21:06.255706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.043 [2024-10-09 03:21:06.255795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.043 [2024-10-09 03:21:06.255817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.043 [2024-10-09 03:21:06.260292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.043 [2024-10-09 03:21:06.260375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.043 [2024-10-09 03:21:06.260396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.043 [2024-10-09 03:21:06.264973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.043 [2024-10-09 03:21:06.265213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.043 [2024-10-09 03:21:06.265236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.043 [2024-10-09 03:21:06.269933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.043 [2024-10-09 03:21:06.270014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.043 [2024-10-09 03:21:06.270036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.043 [2024-10-09 03:21:06.274571] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.043 [2024-10-09 03:21:06.274656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.043 [2024-10-09 03:21:06.274677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.043 [2024-10-09 03:21:06.279085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.043 [2024-10-09 03:21:06.279189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.043 [2024-10-09 03:21:06.279211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.043 [2024-10-09 03:21:06.283671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.043 [2024-10-09 03:21:06.283763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.043 [2024-10-09 03:21:06.283785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.043 [2024-10-09 03:21:06.288264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.043 [2024-10-09 03:21:06.288358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.043 [2024-10-09 03:21:06.288379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.043 [2024-10-09 03:21:06.292788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.043 [2024-10-09 03:21:06.293026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.043 [2024-10-09 03:21:06.293053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.043 [2024-10-09 03:21:06.297538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.043 [2024-10-09 03:21:06.297625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.043 [2024-10-09 03:21:06.297647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.043 [2024-10-09 03:21:06.302016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.043 [2024-10-09 03:21:06.302157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.043 [2024-10-09 03:21:06.302180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:18:23.043 [2024-10-09 03:21:06.306607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.043 [2024-10-09 03:21:06.306687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.043 [2024-10-09 03:21:06.306708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.043 [2024-10-09 03:21:06.311104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.043 [2024-10-09 03:21:06.311186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.043 [2024-10-09 03:21:06.311206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.043 [2024-10-09 03:21:06.315620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.043 [2024-10-09 03:21:06.315700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.043 [2024-10-09 03:21:06.315722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.043 [2024-10-09 03:21:06.320196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.043 [2024-10-09 03:21:06.320279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.043 [2024-10-09 03:21:06.320300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.043 [2024-10-09 03:21:06.324739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.043 [2024-10-09 03:21:06.324966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.043 [2024-10-09 03:21:06.324995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.043 [2024-10-09 03:21:06.329565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.043 [2024-10-09 03:21:06.329646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.043 [2024-10-09 03:21:06.329667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.043 [2024-10-09 03:21:06.334155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.044 [2024-10-09 03:21:06.334257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.044 [2024-10-09 03:21:06.334281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.044 [2024-10-09 03:21:06.338864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.044 [2024-10-09 03:21:06.338943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.044 [2024-10-09 03:21:06.338966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.304 [2024-10-09 03:21:06.343507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.304 [2024-10-09 03:21:06.343606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.304 [2024-10-09 03:21:06.343634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.305 [2024-10-09 03:21:06.348093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.305 [2024-10-09 03:21:06.349298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-09 03:21:06.349487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 6505.00 IOPS, 813.12 MiB/s [2024-10-09T03:21:06.608Z] 00:18:23.305 [2024-10-09 03:21:06.353551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.305 [2024-10-09 03:21:06.353630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-09 03:21:06.353652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.305 [2024-10-09 03:21:06.357595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.305 [2024-10-09 03:21:06.357681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-09 03:21:06.357701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.305 [2024-10-09 03:21:06.361584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.305 [2024-10-09 03:21:06.361685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-09 03:21:06.361705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.305 [2024-10-09 03:21:06.365493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.305 [2024-10-09 03:21:06.365578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-09 03:21:06.365599] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.305 [2024-10-09 03:21:06.369645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.305 [2024-10-09 03:21:06.369733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-09 03:21:06.369755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.305 [2024-10-09 03:21:06.373767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.305 [2024-10-09 03:21:06.373968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-09 03:21:06.374011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.305 [2024-10-09 03:21:06.377637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.305 [2024-10-09 03:21:06.377727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-09 03:21:06.377748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.305 [2024-10-09 03:21:06.381670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.305 [2024-10-09 03:21:06.381772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-09 03:21:06.381793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.305 [2024-10-09 03:21:06.385646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.305 [2024-10-09 03:21:06.385734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-09 03:21:06.385756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.305 [2024-10-09 03:21:06.389675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.305 [2024-10-09 03:21:06.389755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-09 03:21:06.389776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.305 [2024-10-09 03:21:06.393651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.305 [2024-10-09 03:21:06.393770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-09 
03:21:06.393792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.305 [2024-10-09 03:21:06.397648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.305 [2024-10-09 03:21:06.397772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-09 03:21:06.397798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.305 [2024-10-09 03:21:06.401645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.305 [2024-10-09 03:21:06.401789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-09 03:21:06.401817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.305 [2024-10-09 03:21:06.405768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.305 [2024-10-09 03:21:06.405943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-09 03:21:06.405970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.305 [2024-10-09 03:21:06.409862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.305 [2024-10-09 03:21:06.409942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-09 03:21:06.409963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.305 [2024-10-09 03:21:06.414070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.305 [2024-10-09 03:21:06.414215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-09 03:21:06.414238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.305 [2024-10-09 03:21:06.418502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.305 [2024-10-09 03:21:06.418588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-09 03:21:06.418609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.305 [2024-10-09 03:21:06.422652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.305 [2024-10-09 03:21:06.422765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:23.305 [2024-10-09 03:21:06.422786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.305 [2024-10-09 03:21:06.426929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.305 [2024-10-09 03:21:06.427024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-09 03:21:06.427052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.305 [2024-10-09 03:21:06.431241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.305 [2024-10-09 03:21:06.431348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-09 03:21:06.431370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.305 [2024-10-09 03:21:06.435519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.305 [2024-10-09 03:21:06.435747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-09 03:21:06.435790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.305 [2024-10-09 03:21:06.439729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.305 [2024-10-09 03:21:06.439835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-09 03:21:06.439858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.305 [2024-10-09 03:21:06.444110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.305 [2024-10-09 03:21:06.444207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-09 03:21:06.444229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.305 [2024-10-09 03:21:06.448553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.305 [2024-10-09 03:21:06.448806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-09 03:21:06.448828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.305 [2024-10-09 03:21:06.452960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.305 [2024-10-09 03:21:06.453067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-09 03:21:06.453089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.305 [2024-10-09 03:21:06.457414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.305 [2024-10-09 03:21:06.457499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-09 03:21:06.457521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.305 [2024-10-09 03:21:06.461675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.305 [2024-10-09 03:21:06.461779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.305 [2024-10-09 03:21:06.461801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.306 [2024-10-09 03:21:06.465933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.306 [2024-10-09 03:21:06.466063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.306 [2024-10-09 03:21:06.466124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.306 [2024-10-09 03:21:06.469950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.306 [2024-10-09 03:21:06.470166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.306 [2024-10-09 03:21:06.470189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.306 [2024-10-09 03:21:06.474468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.306 [2024-10-09 03:21:06.474554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.306 [2024-10-09 03:21:06.474576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.306 [2024-10-09 03:21:06.478989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.306 [2024-10-09 03:21:06.479090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.306 [2024-10-09 03:21:06.479130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.306 [2024-10-09 03:21:06.483446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.306 [2024-10-09 03:21:06.483563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.306 [2024-10-09 03:21:06.483602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.306 [2024-10-09 03:21:06.487766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.306 [2024-10-09 03:21:06.488135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.306 [2024-10-09 03:21:06.488187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.306 [2024-10-09 03:21:06.492415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.306 [2024-10-09 03:21:06.492529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.306 [2024-10-09 03:21:06.492552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.306 [2024-10-09 03:21:06.496987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.306 [2024-10-09 03:21:06.497115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.306 [2024-10-09 03:21:06.497139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.306 [2024-10-09 03:21:06.501518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.306 [2024-10-09 03:21:06.501601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.306 [2024-10-09 03:21:06.501622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.306 [2024-10-09 03:21:06.505886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.306 [2024-10-09 03:21:06.505972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.306 [2024-10-09 03:21:06.505994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.306 [2024-10-09 03:21:06.510185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.306 [2024-10-09 03:21:06.510259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.306 [2024-10-09 03:21:06.510282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.306 [2024-10-09 03:21:06.514396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.306 [2024-10-09 03:21:06.514596] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.306 [2024-10-09 03:21:06.514617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.306 [2024-10-09 03:21:06.518540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.306 [2024-10-09 03:21:06.518635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.306 [2024-10-09 03:21:06.518657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.306 [2024-10-09 03:21:06.522698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.306 [2024-10-09 03:21:06.522808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.306 [2024-10-09 03:21:06.522830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.306 [2024-10-09 03:21:06.526910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.306 [2024-10-09 03:21:06.527010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.306 [2024-10-09 03:21:06.527032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.306 [2024-10-09 03:21:06.531307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.306 [2024-10-09 03:21:06.531401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.306 [2024-10-09 03:21:06.531423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.306 [2024-10-09 03:21:06.535713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.306 [2024-10-09 03:21:06.535969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.306 [2024-10-09 03:21:06.535993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.306 [2024-10-09 03:21:06.540187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.306 [2024-10-09 03:21:06.540394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.306 [2024-10-09 03:21:06.540423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.306 [2024-10-09 03:21:06.544434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.306 
[2024-10-09 03:21:06.544549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.306 [2024-10-09 03:21:06.544572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.306 [2024-10-09 03:21:06.548799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.306 [2024-10-09 03:21:06.548894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.306 [2024-10-09 03:21:06.548916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.306 [2024-10-09 03:21:06.553144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.306 [2024-10-09 03:21:06.553255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.306 [2024-10-09 03:21:06.553277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.306 [2024-10-09 03:21:06.557460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.306 [2024-10-09 03:21:06.557574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.306 [2024-10-09 03:21:06.557595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.306 [2024-10-09 03:21:06.561878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.306 [2024-10-09 03:21:06.561985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.306 [2024-10-09 03:21:06.562007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.306 [2024-10-09 03:21:06.566211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.306 [2024-10-09 03:21:06.566388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.306 [2024-10-09 03:21:06.566417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.306 [2024-10-09 03:21:06.570302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.306 [2024-10-09 03:21:06.570399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.306 [2024-10-09 03:21:06.570436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.306 [2024-10-09 03:21:06.574447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with 
pdu=0x2000198fef90 00:18:23.306 [2024-10-09 03:21:06.574532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.306 [2024-10-09 03:21:06.574554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.306 [2024-10-09 03:21:06.578704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.306 [2024-10-09 03:21:06.578798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.306 [2024-10-09 03:21:06.578819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.306 [2024-10-09 03:21:06.583067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.306 [2024-10-09 03:21:06.583183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.306 [2024-10-09 03:21:06.583206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.306 [2024-10-09 03:21:06.587374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.306 [2024-10-09 03:21:06.587460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.307 [2024-10-09 03:21:06.587483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.307 [2024-10-09 03:21:06.591672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.307 [2024-10-09 03:21:06.591829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.307 [2024-10-09 03:21:06.591852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.307 [2024-10-09 03:21:06.596063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.307 [2024-10-09 03:21:06.596283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.307 [2024-10-09 03:21:06.596316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.307 [2024-10-09 03:21:06.600344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.307 [2024-10-09 03:21:06.600413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.307 [2024-10-09 03:21:06.600436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.307 [2024-10-09 03:21:06.604842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.307 [2024-10-09 03:21:06.604980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.307 [2024-10-09 03:21:06.605019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.567 [2024-10-09 03:21:06.609508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.567 [2024-10-09 03:21:06.609620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.567 [2024-10-09 03:21:06.609643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.567 [2024-10-09 03:21:06.613921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.567 [2024-10-09 03:21:06.614021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.567 [2024-10-09 03:21:06.614042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.567 [2024-10-09 03:21:06.618349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.567 [2024-10-09 03:21:06.618463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.567 [2024-10-09 03:21:06.618485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.567 [2024-10-09 03:21:06.622821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.567 [2024-10-09 03:21:06.623078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.567 [2024-10-09 03:21:06.623102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.567 [2024-10-09 03:21:06.627181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.567 [2024-10-09 03:21:06.627436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.567 [2024-10-09 03:21:06.627497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.567 [2024-10-09 03:21:06.631242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.567 [2024-10-09 03:21:06.631335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.567 [2024-10-09 03:21:06.631358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.567 [2024-10-09 03:21:06.635647] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.567 [2024-10-09 03:21:06.635740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.567 [2024-10-09 03:21:06.635771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.567 [2024-10-09 03:21:06.639868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.567 [2024-10-09 03:21:06.639950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.567 [2024-10-09 03:21:06.639988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.567 [2024-10-09 03:21:06.644215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.567 [2024-10-09 03:21:06.644305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.567 [2024-10-09 03:21:06.644328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.567 [2024-10-09 03:21:06.648536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.567 [2024-10-09 03:21:06.648626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.567 [2024-10-09 03:21:06.648648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.567 [2024-10-09 03:21:06.653017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.567 [2024-10-09 03:21:06.653275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.567 [2024-10-09 03:21:06.653308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.567 [2024-10-09 03:21:06.657357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.567 [2024-10-09 03:21:06.657599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.567 [2024-10-09 03:21:06.657644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.567 [2024-10-09 03:21:06.661552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.567 [2024-10-09 03:21:06.661685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.567 [2024-10-09 03:21:06.661706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.567 
[2024-10-09 03:21:06.665627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.567 [2024-10-09 03:21:06.665732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.567 [2024-10-09 03:21:06.665754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.567 [2024-10-09 03:21:06.669853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.567 [2024-10-09 03:21:06.669970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.567 [2024-10-09 03:21:06.669992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.567 [2024-10-09 03:21:06.674225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.567 [2024-10-09 03:21:06.674367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.567 [2024-10-09 03:21:06.674396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.567 [2024-10-09 03:21:06.678380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.567 [2024-10-09 03:21:06.678572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.567 [2024-10-09 03:21:06.678593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.567 [2024-10-09 03:21:06.682740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.567 [2024-10-09 03:21:06.683095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.567 [2024-10-09 03:21:06.683163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.567 [2024-10-09 03:21:06.687068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.567 [2024-10-09 03:21:06.687330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.567 [2024-10-09 03:21:06.687359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.567 [2024-10-09 03:21:06.691492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.567 [2024-10-09 03:21:06.691590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.567 [2024-10-09 03:21:06.691611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:18:23.567 [2024-10-09 03:21:06.695758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.567 [2024-10-09 03:21:06.695843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.567 [2024-10-09 03:21:06.695865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.567 [2024-10-09 03:21:06.699962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.567 [2024-10-09 03:21:06.700164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.567 [2024-10-09 03:21:06.700202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.567 [2024-10-09 03:21:06.704305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.567 [2024-10-09 03:21:06.704474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.567 [2024-10-09 03:21:06.704501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.567 [2024-10-09 03:21:06.708418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.567 [2024-10-09 03:21:06.708501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.567 [2024-10-09 03:21:06.708539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.567 [2024-10-09 03:21:06.712535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.712616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.712637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.716689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.716770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.716791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.720819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.720900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.720921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.725137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.725336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.725364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.729642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.729821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.729848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.734014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.734165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.734190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.738668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.738940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.738964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.743337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.743429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.743452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.748022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.748162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.748186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.752609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.752716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.752737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.757109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.757224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.757248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.761389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.761487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.761510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.765857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.765955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.765978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.770389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.770509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.770532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.774680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.774775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.774797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.778723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.778823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.778845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.782959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.783071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.783093] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.786990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.787245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.787296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.790977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.791076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.791100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.795207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.795321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.795343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.799481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.799615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.799638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.803829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.803942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.803964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.808166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.808281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.808304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.812301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.812411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 
03:21:06.812434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.816600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.816714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.816736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.820882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.820973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.820996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.825132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.825307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.825329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.829492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.829709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.829732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.833810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.833915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.833936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.838063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.838183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.838205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.842246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.842342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:23.568 [2024-10-09 03:21:06.842364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.846332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.846411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.846467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.850527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.850627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.850649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.854493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.854599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.854620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.858568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.858667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.858690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.862737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.862842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.862864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.568 [2024-10-09 03:21:06.866819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.568 [2024-10-09 03:21:06.866921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.568 [2024-10-09 03:21:06.866944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.829 [2024-10-09 03:21:06.870867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.829 [2024-10-09 03:21:06.870987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:23.829 [2024-10-09 03:21:06.871008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.829 [2024-10-09 03:21:06.874950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.829 [2024-10-09 03:21:06.875192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.829 [2024-10-09 03:21:06.875239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.829 [2024-10-09 03:21:06.878977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.829 [2024-10-09 03:21:06.879077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.829 [2024-10-09 03:21:06.879101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.829 [2024-10-09 03:21:06.883096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.829 [2024-10-09 03:21:06.883215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.829 [2024-10-09 03:21:06.883236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.829 [2024-10-09 03:21:06.887172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.829 [2024-10-09 03:21:06.887291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.829 [2024-10-09 03:21:06.887313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.829 [2024-10-09 03:21:06.891229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.829 [2024-10-09 03:21:06.891332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.829 [2024-10-09 03:21:06.891354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.829 [2024-10-09 03:21:06.895195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.829 [2024-10-09 03:21:06.895293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.829 [2024-10-09 03:21:06.895316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.829 [2024-10-09 03:21:06.899229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.829 [2024-10-09 03:21:06.899341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19072 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.829 [2024-10-09 03:21:06.899363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.829 [2024-10-09 03:21:06.903113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.829 [2024-10-09 03:21:06.903318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.829 [2024-10-09 03:21:06.903349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.829 [2024-10-09 03:21:06.907166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.829 [2024-10-09 03:21:06.907272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.829 [2024-10-09 03:21:06.907294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.829 [2024-10-09 03:21:06.911171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.829 [2024-10-09 03:21:06.911290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.830 [2024-10-09 03:21:06.911312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.830 [2024-10-09 03:21:06.915266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.830 [2024-10-09 03:21:06.915359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.830 [2024-10-09 03:21:06.915382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.830 [2024-10-09 03:21:06.919312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.830 [2024-10-09 03:21:06.919415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.830 [2024-10-09 03:21:06.919438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.830 [2024-10-09 03:21:06.923473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.830 [2024-10-09 03:21:06.923655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.830 [2024-10-09 03:21:06.923689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.830 [2024-10-09 03:21:06.927756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.830 [2024-10-09 03:21:06.928045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.830 [2024-10-09 03:21:06.928095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.830 [2024-10-09 03:21:06.931843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.830 [2024-10-09 03:21:06.931987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.830 [2024-10-09 03:21:06.932009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.830 [2024-10-09 03:21:06.936262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.830 [2024-10-09 03:21:06.936370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.830 [2024-10-09 03:21:06.936392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.830 [2024-10-09 03:21:06.940455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.830 [2024-10-09 03:21:06.940573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.830 [2024-10-09 03:21:06.940595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.830 [2024-10-09 03:21:06.944759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.830 [2024-10-09 03:21:06.944861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.830 [2024-10-09 03:21:06.944883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.830 [2024-10-09 03:21:06.948980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.830 [2024-10-09 03:21:06.949142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.830 [2024-10-09 03:21:06.949165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.830 [2024-10-09 03:21:06.953208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.830 [2024-10-09 03:21:06.953443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.830 [2024-10-09 03:21:06.953495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.830 [2024-10-09 03:21:06.957293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.830 [2024-10-09 03:21:06.957421] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.830 [2024-10-09 03:21:06.957442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.830 [2024-10-09 03:21:06.961544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.830 [2024-10-09 03:21:06.961646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.830 [2024-10-09 03:21:06.961668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.830 [2024-10-09 03:21:06.965732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.830 [2024-10-09 03:21:06.965828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.830 [2024-10-09 03:21:06.965849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.830 [2024-10-09 03:21:06.969906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.830 [2024-10-09 03:21:06.970026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.830 [2024-10-09 03:21:06.970048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.830 [2024-10-09 03:21:06.974053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.830 [2024-10-09 03:21:06.974271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.830 [2024-10-09 03:21:06.974313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.830 [2024-10-09 03:21:06.978273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.830 [2024-10-09 03:21:06.978353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.830 [2024-10-09 03:21:06.978376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.830 [2024-10-09 03:21:06.982482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.830 [2024-10-09 03:21:06.982588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.830 [2024-10-09 03:21:06.982627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.830 [2024-10-09 03:21:06.986766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.830 [2024-10-09 03:21:06.986882] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.830 [2024-10-09 03:21:06.986904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.830 [2024-10-09 03:21:06.990991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.830 [2024-10-09 03:21:06.991096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.830 [2024-10-09 03:21:06.991118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.830 [2024-10-09 03:21:06.995075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.830 [2024-10-09 03:21:06.995248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.830 [2024-10-09 03:21:06.995270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.830 [2024-10-09 03:21:06.999188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.830 [2024-10-09 03:21:06.999312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.830 [2024-10-09 03:21:06.999334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.830 [2024-10-09 03:21:07.003081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.830 [2024-10-09 03:21:07.003278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.830 [2024-10-09 03:21:07.003309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.830 [2024-10-09 03:21:07.007184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.830 [2024-10-09 03:21:07.007292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.830 [2024-10-09 03:21:07.007314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.830 [2024-10-09 03:21:07.011190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.830 [2024-10-09 03:21:07.011289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.830 [2024-10-09 03:21:07.011311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.830 [2024-10-09 03:21:07.015210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.830 [2024-10-09 
03:21:07.015305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.830 [2024-10-09 03:21:07.015327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.830 [2024-10-09 03:21:07.019188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.830 [2024-10-09 03:21:07.019298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.830 [2024-10-09 03:21:07.019320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.830 [2024-10-09 03:21:07.023239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.830 [2024-10-09 03:21:07.023344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.830 [2024-10-09 03:21:07.023366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.830 [2024-10-09 03:21:07.027293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.830 [2024-10-09 03:21:07.027488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.830 [2024-10-09 03:21:07.027519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.830 [2024-10-09 03:21:07.031439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.831 [2024-10-09 03:21:07.031692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.831 [2024-10-09 03:21:07.031743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.831 [2024-10-09 03:21:07.035508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.831 [2024-10-09 03:21:07.035653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.831 [2024-10-09 03:21:07.035675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.831 [2024-10-09 03:21:07.039603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.831 [2024-10-09 03:21:07.039703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.831 [2024-10-09 03:21:07.039727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.831 [2024-10-09 03:21:07.043620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 
00:18:23.831 [2024-10-09 03:21:07.043731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.831 [2024-10-09 03:21:07.043752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.831 [2024-10-09 03:21:07.047804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.831 [2024-10-09 03:21:07.047913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.831 [2024-10-09 03:21:07.047935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.831 [2024-10-09 03:21:07.052072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.831 [2024-10-09 03:21:07.052278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.831 [2024-10-09 03:21:07.052310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.831 [2024-10-09 03:21:07.056263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.831 [2024-10-09 03:21:07.056444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.831 [2024-10-09 03:21:07.056475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.831 [2024-10-09 03:21:07.060748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.831 [2024-10-09 03:21:07.060861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.831 [2024-10-09 03:21:07.060884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.831 [2024-10-09 03:21:07.065128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.831 [2024-10-09 03:21:07.065266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.831 [2024-10-09 03:21:07.065289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.831 [2024-10-09 03:21:07.069701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.831 [2024-10-09 03:21:07.069785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.831 [2024-10-09 03:21:07.069809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.831 [2024-10-09 03:21:07.074317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with 
pdu=0x2000198fef90 00:18:23.831 [2024-10-09 03:21:07.074419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.831 [2024-10-09 03:21:07.074456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.831 [2024-10-09 03:21:07.078643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.831 [2024-10-09 03:21:07.078817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.831 [2024-10-09 03:21:07.078858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.831 [2024-10-09 03:21:07.083117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.831 [2024-10-09 03:21:07.083391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.831 [2024-10-09 03:21:07.083426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.831 [2024-10-09 03:21:07.087403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.831 [2024-10-09 03:21:07.087620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.831 [2024-10-09 03:21:07.087654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.831 [2024-10-09 03:21:07.091659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.831 [2024-10-09 03:21:07.091769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.831 [2024-10-09 03:21:07.091791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.831 [2024-10-09 03:21:07.096289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.831 [2024-10-09 03:21:07.096396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.831 [2024-10-09 03:21:07.096420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.831 [2024-10-09 03:21:07.100496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.831 [2024-10-09 03:21:07.100595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.831 [2024-10-09 03:21:07.100617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.831 [2024-10-09 03:21:07.104481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.831 [2024-10-09 03:21:07.104575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.831 [2024-10-09 03:21:07.104597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.831 [2024-10-09 03:21:07.108439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.831 [2024-10-09 03:21:07.108535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.831 [2024-10-09 03:21:07.108556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.831 [2024-10-09 03:21:07.112381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.831 [2024-10-09 03:21:07.112496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.831 [2024-10-09 03:21:07.112518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.831 [2024-10-09 03:21:07.116487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.831 [2024-10-09 03:21:07.116582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.831 [2024-10-09 03:21:07.116603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.831 [2024-10-09 03:21:07.120543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.831 [2024-10-09 03:21:07.120637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.831 [2024-10-09 03:21:07.120659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.831 [2024-10-09 03:21:07.124519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.831 [2024-10-09 03:21:07.124626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.831 [2024-10-09 03:21:07.124648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.831 [2024-10-09 03:21:07.128620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:23.831 [2024-10-09 03:21:07.128811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.831 [2024-10-09 03:21:07.128843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.092 [2024-10-09 03:21:07.132602] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.092 [2024-10-09 03:21:07.132849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.092 [2024-10-09 03:21:07.132897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.092 [2024-10-09 03:21:07.136565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.092 [2024-10-09 03:21:07.136690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.092 [2024-10-09 03:21:07.136712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.092 [2024-10-09 03:21:07.140586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.092 [2024-10-09 03:21:07.140701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.092 [2024-10-09 03:21:07.140723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.092 [2024-10-09 03:21:07.144619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.092 [2024-10-09 03:21:07.144721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.092 [2024-10-09 03:21:07.144744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.092 [2024-10-09 03:21:07.148638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.092 [2024-10-09 03:21:07.148801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.092 [2024-10-09 03:21:07.148833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.092 [2024-10-09 03:21:07.152697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.092 [2024-10-09 03:21:07.152794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.092 [2024-10-09 03:21:07.152816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.092 [2024-10-09 03:21:07.156797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.092 [2024-10-09 03:21:07.156889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.092 [2024-10-09 03:21:07.156911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.092 [2024-10-09 03:21:07.160843] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.092 [2024-10-09 03:21:07.160938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.092 [2024-10-09 03:21:07.160959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.092 [2024-10-09 03:21:07.164803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.092 [2024-10-09 03:21:07.164924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.092 [2024-10-09 03:21:07.164945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.092 [2024-10-09 03:21:07.168984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.092 [2024-10-09 03:21:07.169098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.092 [2024-10-09 03:21:07.169120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.092 [2024-10-09 03:21:07.172951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.092 [2024-10-09 03:21:07.173127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.093 [2024-10-09 03:21:07.173158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.093 [2024-10-09 03:21:07.176933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.093 [2024-10-09 03:21:07.177023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.093 [2024-10-09 03:21:07.177045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.093 [2024-10-09 03:21:07.181074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.093 [2024-10-09 03:21:07.181203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.093 [2024-10-09 03:21:07.181224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.093 [2024-10-09 03:21:07.185092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.093 [2024-10-09 03:21:07.185191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.093 [2024-10-09 03:21:07.185212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.093 
[2024-10-09 03:21:07.189043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.093 [2024-10-09 03:21:07.189153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.093 [2024-10-09 03:21:07.189175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.093 [2024-10-09 03:21:07.193001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.093 [2024-10-09 03:21:07.193144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.093 [2024-10-09 03:21:07.193178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.093 [2024-10-09 03:21:07.197103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.093 [2024-10-09 03:21:07.197214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.093 [2024-10-09 03:21:07.197237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.093 [2024-10-09 03:21:07.201119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.093 [2024-10-09 03:21:07.201249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.093 [2024-10-09 03:21:07.201270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.093 [2024-10-09 03:21:07.205121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.093 [2024-10-09 03:21:07.205217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.093 [2024-10-09 03:21:07.205239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.093 [2024-10-09 03:21:07.209036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.093 [2024-10-09 03:21:07.209162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.093 [2024-10-09 03:21:07.209184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.093 [2024-10-09 03:21:07.212989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.093 [2024-10-09 03:21:07.213196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.093 [2024-10-09 03:21:07.213228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:18:24.093 [2024-10-09 03:21:07.216820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.093 [2024-10-09 03:21:07.216927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.093 [2024-10-09 03:21:07.216948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.093 [2024-10-09 03:21:07.220864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.093 [2024-10-09 03:21:07.220970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.093 [2024-10-09 03:21:07.220992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.093 [2024-10-09 03:21:07.224911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.093 [2024-10-09 03:21:07.225008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.093 [2024-10-09 03:21:07.225030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.093 [2024-10-09 03:21:07.228912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.093 [2024-10-09 03:21:07.229007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.093 [2024-10-09 03:21:07.229028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.093 [2024-10-09 03:21:07.232893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.093 [2024-10-09 03:21:07.232988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.093 [2024-10-09 03:21:07.233009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.093 [2024-10-09 03:21:07.237021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.093 [2024-10-09 03:21:07.237151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.093 [2024-10-09 03:21:07.237173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.093 [2024-10-09 03:21:07.241078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.093 [2024-10-09 03:21:07.241246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.093 [2024-10-09 03:21:07.241277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.093 [2024-10-09 03:21:07.245072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.093 [2024-10-09 03:21:07.245180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.093 [2024-10-09 03:21:07.245202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.093 [2024-10-09 03:21:07.249086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.093 [2024-10-09 03:21:07.249183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.093 [2024-10-09 03:21:07.249205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.093 [2024-10-09 03:21:07.253057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.093 [2024-10-09 03:21:07.253224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.093 [2024-10-09 03:21:07.253265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.093 [2024-10-09 03:21:07.257194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.093 [2024-10-09 03:21:07.257289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.093 [2024-10-09 03:21:07.257311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.093 [2024-10-09 03:21:07.261163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.093 [2024-10-09 03:21:07.261259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.093 [2024-10-09 03:21:07.261281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.093 [2024-10-09 03:21:07.265031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.093 [2024-10-09 03:21:07.265219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.093 [2024-10-09 03:21:07.265250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.093 [2024-10-09 03:21:07.269125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.093 [2024-10-09 03:21:07.269228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.093 [2024-10-09 03:21:07.269250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.093 [2024-10-09 03:21:07.273035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.093 [2024-10-09 03:21:07.273155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.093 [2024-10-09 03:21:07.273178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.093 [2024-10-09 03:21:07.277093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.093 [2024-10-09 03:21:07.277188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.093 [2024-10-09 03:21:07.277210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.093 [2024-10-09 03:21:07.280997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.093 [2024-10-09 03:21:07.281121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.093 [2024-10-09 03:21:07.281143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.093 [2024-10-09 03:21:07.284957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.093 [2024-10-09 03:21:07.285055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.093 [2024-10-09 03:21:07.285077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.093 [2024-10-09 03:21:07.288907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.093 [2024-10-09 03:21:07.289021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.094 [2024-10-09 03:21:07.289043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.094 [2024-10-09 03:21:07.292877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.094 [2024-10-09 03:21:07.292971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.094 [2024-10-09 03:21:07.292994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.094 [2024-10-09 03:21:07.296839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.094 [2024-10-09 03:21:07.296947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.094 [2024-10-09 03:21:07.296968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.094 [2024-10-09 03:21:07.300844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.094 [2024-10-09 03:21:07.300989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.094 [2024-10-09 03:21:07.301012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.094 [2024-10-09 03:21:07.304879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.094 [2024-10-09 03:21:07.305058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.094 [2024-10-09 03:21:07.305090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.094 [2024-10-09 03:21:07.308850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.094 [2024-10-09 03:21:07.308957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.094 [2024-10-09 03:21:07.308979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.094 [2024-10-09 03:21:07.312826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.094 [2024-10-09 03:21:07.312933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.094 [2024-10-09 03:21:07.312954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.094 [2024-10-09 03:21:07.316877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.094 [2024-10-09 03:21:07.316993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.094 [2024-10-09 03:21:07.317016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.094 [2024-10-09 03:21:07.320885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.094 [2024-10-09 03:21:07.320991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.094 [2024-10-09 03:21:07.321012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.094 [2024-10-09 03:21:07.324898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.094 [2024-10-09 03:21:07.325070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.094 [2024-10-09 03:21:07.325106] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.094 [2024-10-09 03:21:07.328933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.094 [2024-10-09 03:21:07.329041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.094 [2024-10-09 03:21:07.329062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.094 [2024-10-09 03:21:07.332936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.094 [2024-10-09 03:21:07.333080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.094 [2024-10-09 03:21:07.333124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.094 [2024-10-09 03:21:07.337135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.094 [2024-10-09 03:21:07.337355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.094 [2024-10-09 03:21:07.337387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.094 [2024-10-09 03:21:07.341003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.094 [2024-10-09 03:21:07.341179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.094 [2024-10-09 03:21:07.341210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.094 [2024-10-09 03:21:07.344916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.094 [2024-10-09 03:21:07.345018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.094 [2024-10-09 03:21:07.345040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.094 6953.50 IOPS, 869.19 MiB/s [2024-10-09T03:21:07.397Z] [2024-10-09 03:21:07.350195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x759230) with pdu=0x2000198fef90 00:18:24.094 [2024-10-09 03:21:07.350281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.094 [2024-10-09 03:21:07.350304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.094 00:18:24.094 Latency(us) 00:18:24.094 [2024-10-09T03:21:07.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.094 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:24.094 nvme0n1 : 2.00 6951.15 868.89 0.00 0.00 2296.15 
1578.82 12213.53 00:18:24.094 [2024-10-09T03:21:07.397Z] =================================================================================================================== 00:18:24.094 [2024-10-09T03:21:07.397Z] Total : 6951.15 868.89 0.00 0.00 2296.15 1578.82 12213.53 00:18:24.094 { 00:18:24.094 "results": [ 00:18:24.094 { 00:18:24.094 "job": "nvme0n1", 00:18:24.094 "core_mask": "0x2", 00:18:24.094 "workload": "randwrite", 00:18:24.094 "status": "finished", 00:18:24.094 "queue_depth": 16, 00:18:24.094 "io_size": 131072, 00:18:24.094 "runtime": 2.003841, 00:18:24.094 "iops": 6951.1503158184705, 00:18:24.094 "mibps": 868.8937894773088, 00:18:24.094 "io_failed": 0, 00:18:24.094 "io_timeout": 0, 00:18:24.094 "avg_latency_us": 2296.1507301313804, 00:18:24.094 "min_latency_us": 1578.8218181818181, 00:18:24.094 "max_latency_us": 12213.527272727273 00:18:24.094 } 00:18:24.094 ], 00:18:24.094 "core_count": 1 00:18:24.094 } 00:18:24.094 03:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:24.094 03:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:24.094 03:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:24.094 03:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:24.094 | .driver_specific 00:18:24.094 | .nvme_error 00:18:24.094 | .status_code 00:18:24.094 | .command_transient_transport_error' 00:18:24.366 03:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 449 > 0 )) 00:18:24.366 03:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80532 00:18:24.366 03:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 80532 ']' 00:18:24.366 03:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 80532 00:18:24.366 03:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:18:24.366 03:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:24.366 03:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80532 00:18:24.366 03:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:24.366 03:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:24.366 03:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80532' 00:18:24.366 killing process with pid 80532 00:18:24.366 Received shutdown signal, test time was about 2.000000 seconds 00:18:24.366 00:18:24.366 Latency(us) 00:18:24.366 [2024-10-09T03:21:07.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.366 [2024-10-09T03:21:07.669Z] =================================================================================================================== 00:18:24.366 [2024-10-09T03:21:07.669Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:24.366 03:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 80532 00:18:24.366 
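The transient-error check traced just above amounts to querying per-bdev I/O statistics over the bperf RPC socket and asserting that the transient transport error counter is non-zero. A minimal standalone sketch of that same check, reusing the rpc.py path, socket, bdev name, and jq filter shown in the trace (the errcount variable name is illustrative, not part of the test scripts):

    #!/usr/bin/env bash
    # Sketch of the transient-error assertion performed by host/digest.sh in the trace above.
    # Paths and the bdev name are taken from this log; adjust for other environments.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bperf.sock
    BDEV=nvme0n1

    # Query bdev I/O statistics over the bperf RPC socket and extract the
    # transient transport error counter from the NVMe error status codes.
    errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b "$BDEV" |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

    # The digest-error test only passes if at least one transient transport error
    # was recorded while data-digest errors were being injected; this run saw 449.
    (( errcount > 0 )) || { echo "no transient transport errors recorded" >&2; exit 1; }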
03:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 80532 00:18:24.675 03:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80319 00:18:24.675 03:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 80319 ']' 00:18:24.675 03:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 80319 00:18:24.675 03:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:18:24.675 03:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:24.675 03:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80319 00:18:24.675 03:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:24.675 03:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:24.675 03:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80319' 00:18:24.675 killing process with pid 80319 00:18:24.675 03:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 80319 00:18:24.675 03:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 80319 00:18:24.942 00:18:24.942 real 0m18.851s 00:18:24.942 user 0m36.737s 00:18:24.942 sys 0m4.805s 00:18:24.942 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:24.942 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:24.942 ************************************ 00:18:24.942 END TEST nvmf_digest_error 00:18:24.942 ************************************ 00:18:24.942 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:24.942 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:18:24.942 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:24.942 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:18:25.202 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:25.202 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:18:25.202 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:25.202 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:25.202 rmmod nvme_tcp 00:18:25.202 rmmod nvme_fabrics 00:18:25.202 rmmod nvme_keyring 00:18:25.202 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:25.202 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:18:25.202 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:18:25.202 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 80319 ']' 00:18:25.202 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 80319 00:18:25.202 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 80319 ']' 00:18:25.202 Process with pid 80319 is not found 00:18:25.202 03:21:08 
nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 80319 00:18:25.202 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (80319) - No such process 00:18:25.202 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 80319 is not found' 00:18:25.202 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:25.202 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:25.202 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:25.202 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:18:25.202 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:25.202 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:18:25.202 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:18:25.202 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:25.202 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:25.202 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:25.202 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:25.202 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:25.202 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:25.202 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:25.202 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:25.202 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:25.202 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:25.202 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:25.202 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:25.202 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:25.202 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:25.462 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:25.462 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:25.462 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.462 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:25.462 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.462 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:18:25.462 00:18:25.462 real 0m38.626s 00:18:25.462 user 1m13.620s 00:18:25.462 sys 0m9.953s 00:18:25.462 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:25.462 ************************************ 00:18:25.462 END TEST 
nvmf_digest 00:18:25.462 03:21:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:25.462 ************************************ 00:18:25.462 03:21:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:18:25.462 03:21:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:18:25.462 03:21:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:25.462 03:21:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:25.462 03:21:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:25.462 03:21:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.462 ************************************ 00:18:25.462 START TEST nvmf_host_multipath 00:18:25.462 ************************************ 00:18:25.462 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:25.462 * Looking for test storage... 00:18:25.462 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:25.462 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:25.462 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:25.462 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:25.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.722 --rc genhtml_branch_coverage=1 00:18:25.722 --rc genhtml_function_coverage=1 00:18:25.722 --rc genhtml_legend=1 00:18:25.722 --rc geninfo_all_blocks=1 00:18:25.722 --rc geninfo_unexecuted_blocks=1 00:18:25.722 00:18:25.722 ' 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:25.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.722 --rc genhtml_branch_coverage=1 00:18:25.722 --rc genhtml_function_coverage=1 00:18:25.722 --rc genhtml_legend=1 00:18:25.722 --rc geninfo_all_blocks=1 00:18:25.722 --rc geninfo_unexecuted_blocks=1 00:18:25.722 00:18:25.722 ' 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:25.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.722 --rc genhtml_branch_coverage=1 00:18:25.722 --rc genhtml_function_coverage=1 00:18:25.722 --rc genhtml_legend=1 00:18:25.722 --rc geninfo_all_blocks=1 00:18:25.722 --rc geninfo_unexecuted_blocks=1 00:18:25.722 00:18:25.722 ' 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:25.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.722 --rc genhtml_branch_coverage=1 00:18:25.722 --rc genhtml_function_coverage=1 00:18:25.722 --rc genhtml_legend=1 00:18:25.722 --rc geninfo_all_blocks=1 00:18:25.722 --rc geninfo_unexecuted_blocks=1 00:18:25.722 00:18:25.722 ' 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:25.722 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:25.723 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@458 -- # nvmf_veth_init 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:25.723 Cannot find device "nvmf_init_br" 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:25.723 Cannot find device "nvmf_init_br2" 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:25.723 Cannot find device "nvmf_tgt_br" 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:25.723 Cannot find device "nvmf_tgt_br2" 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:25.723 Cannot find device "nvmf_init_br" 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:25.723 Cannot find device "nvmf_init_br2" 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:25.723 Cannot find device "nvmf_tgt_br" 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:25.723 Cannot find device "nvmf_tgt_br2" 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:25.723 Cannot find device "nvmf_br" 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:25.723 Cannot find device "nvmf_init_if" 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:25.723 Cannot find device "nvmf_init_if2" 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:18:25.723 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:25.723 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:18:25.724 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:25.724 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:25.724 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:18:25.724 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:25.724 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:25.724 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:25.724 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:25.724 03:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:25.724 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:25.724 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:25.983 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:25.983 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:18:25.983 00:18:25.983 --- 10.0.0.3 ping statistics --- 00:18:25.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.983 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:25.983 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:25.983 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:18:25.983 00:18:25.983 --- 10.0.0.4 ping statistics --- 00:18:25.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.983 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:25.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:25.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:18:25.983 00:18:25.983 --- 10.0.0.1 ping statistics --- 00:18:25.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.983 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:25.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:25.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:18:25.983 00:18:25.983 --- 10.0.0.2 ping statistics --- 00:18:25.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.983 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # return 0 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:25.983 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:25.984 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:25.984 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # nvmfpid=80860 00:18:25.984 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # waitforlisten 80860 00:18:25.984 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:25.984 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 80860 ']' 00:18:25.984 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.984 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:25.984 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.984 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:25.984 03:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:25.984 [2024-10-09 03:21:09.272738] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
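For orientation, the "Cannot find device" messages and the pings above are nvmf_veth_init tearing down any leftovers and then building the private test topology from scratch. A simplified sketch of one initiator/target pair, using the names and addresses visible in this log (the second pair, nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2/10.0.0.4, is set up the same way):

  ip netns add nvmf_tgt_ns_spdk                                  # target runs in its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge joins both sides
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.3                                             # sanity-check connectivity
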
00:18:25.984 [2024-10-09 03:21:09.273473] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.243 [2024-10-09 03:21:09.412057] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:26.243 [2024-10-09 03:21:09.496999] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:26.243 [2024-10-09 03:21:09.497060] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:26.243 [2024-10-09 03:21:09.497072] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:26.243 [2024-10-09 03:21:09.497079] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:26.243 [2024-10-09 03:21:09.497085] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:26.243 [2024-10-09 03:21:09.497646] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:26.243 [2024-10-09 03:21:09.497655] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.502 [2024-10-09 03:21:09.553730] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:27.070 03:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:27.070 03:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:18:27.070 03:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:27.070 03:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:27.070 03:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:27.070 03:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:27.070 03:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80860 00:18:27.070 03:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:27.329 [2024-10-09 03:21:10.553041] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:27.329 03:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:27.588 Malloc0 00:18:27.847 03:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:28.106 03:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:28.365 03:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:28.624 [2024-10-09 03:21:11.742995] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:28.624 03:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:28.882 [2024-10-09 03:21:11.979048] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:28.882 03:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80916 00:18:28.882 03:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:28.882 03:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:28.882 03:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80916 /var/tmp/bdevperf.sock 00:18:28.882 03:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 80916 ']' 00:18:28.882 03:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:28.882 03:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:28.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:28.882 03:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:28.882 03:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:28.882 03:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:29.817 03:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:29.818 03:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:18:29.818 03:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:30.077 03:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:30.336 Nvme0n1 00:18:30.336 03:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:30.904 Nvme0n1 00:18:30.904 03:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:18:30.904 03:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:31.841 03:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:31.841 03:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:32.099 03:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
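Taken together, the RPCs above set up the multipath scenario: one subsystem exposed on two TCP listeners, and a bdevperf initiator told to treat both as paths to the same namespace. A condensed sketch of the sequence this log is executing (commands and arguments are taken from the trace itself; waits and error handling are omitted):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # target side (nvmf_tgt running inside the test namespace)
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2   # -r: ANA reporting
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

  # initiator side (bdevperf with its own RPC socket)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 90 &
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1          # unlimited bdev-level retries
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  # -x multipath: add this listener as a second path to the same Nvme0 controller
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
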
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:32.358 03:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:32.358 03:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80961 00:18:32.358 03:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:32.359 03:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80860 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:38.950 03:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:38.950 03:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:38.950 03:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:38.950 03:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:38.950 Attaching 4 probes... 00:18:38.950 @path[10.0.0.3, 4421]: 18474 00:18:38.950 @path[10.0.0.3, 4421]: 18754 00:18:38.950 @path[10.0.0.3, 4421]: 18576 00:18:38.950 @path[10.0.0.3, 4421]: 18544 00:18:38.950 @path[10.0.0.3, 4421]: 18256 00:18:38.950 03:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:38.950 03:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:38.950 03:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:38.950 03:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:38.950 03:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:38.950 03:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:38.950 03:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80961 00:18:38.950 03:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:38.950 03:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:38.950 03:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:38.950 03:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:39.209 03:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:39.209 03:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81080 00:18:39.209 03:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80860 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:39.209 03:21:22 
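The confirm_io_on_port helper that follows ties the two halves together: a bpftrace script records which listener each I/O actually used, and the RPC listener list says which port should be carrying traffic for the expected ANA state. A rough sketch of that check, assuming (as the "cat trace.txt" steps below suggest) that the bpftrace output is redirected into trace.txt as "@path[ip, port]: count" lines:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # 1. trace I/O paths on the running target (pid 80860 in this run) for a few seconds
  /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80860 \
      /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt > trace.txt &
  sleep 6

  # 2. which listener does the target report in the expected ANA state?
  expected_port=$($rpc nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
    | jq -r '.[] | select(.ana_states[0].ana_state=="optimized") | .address.trsvcid')

  # 3. which port did the traced I/O actually go to?
  actual_port=$(awk '$1=="@path[10.0.0.3," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)

  [[ "$actual_port" == "$expected_port" ]] && echo "I/O is flowing on the expected path ($actual_port)"
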
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:45.777 03:21:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:45.777 03:21:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:45.777 03:21:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:45.777 03:21:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:45.777 Attaching 4 probes... 00:18:45.777 @path[10.0.0.3, 4420]: 16591 00:18:45.777 @path[10.0.0.3, 4420]: 16911 00:18:45.777 @path[10.0.0.3, 4420]: 17052 00:18:45.777 @path[10.0.0.3, 4420]: 17226 00:18:45.777 @path[10.0.0.3, 4420]: 17171 00:18:45.777 03:21:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:45.777 03:21:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:45.777 03:21:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:45.777 03:21:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:45.777 03:21:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:45.777 03:21:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:45.777 03:21:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81080 00:18:45.777 03:21:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:45.777 03:21:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:45.777 03:21:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:45.777 03:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:46.036 03:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:46.036 03:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81193 00:18:46.036 03:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80860 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:46.036 03:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:52.602 03:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:52.602 03:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:52.602 03:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:52.602 03:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:52.602 Attaching 4 probes... 00:18:52.602 @path[10.0.0.3, 4421]: 16386 00:18:52.602 @path[10.0.0.3, 4421]: 19590 00:18:52.602 @path[10.0.0.3, 4421]: 18240 00:18:52.602 @path[10.0.0.3, 4421]: 17213 00:18:52.602 @path[10.0.0.3, 4421]: 17423 00:18:52.602 03:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:52.602 03:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:52.602 03:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:52.602 03:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:52.602 03:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:52.602 03:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:52.602 03:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81193 00:18:52.602 03:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:52.602 03:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:52.602 03:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:52.602 03:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:52.860 03:21:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:52.860 03:21:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80860 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:52.860 03:21:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81305 00:18:52.860 03:21:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:59.429 03:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:59.429 03:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:59.429 03:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:18:59.429 03:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:59.429 Attaching 4 probes... 
00:18:59.429 00:18:59.429 00:18:59.429 00:18:59.429 00:18:59.429 00:18:59.429 03:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:59.429 03:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:59.429 03:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:59.429 03:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:18:59.429 03:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:18:59.429 03:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:18:59.429 03:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81305 00:18:59.429 03:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:59.429 03:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:18:59.429 03:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:59.429 03:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:59.687 03:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:18:59.687 03:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80860 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:59.687 03:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81423 00:18:59.687 03:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:06.252 03:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:06.253 03:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:06.253 03:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:06.253 03:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:06.253 Attaching 4 probes... 
00:19:06.253 @path[10.0.0.3, 4421]: 18399 00:19:06.253 @path[10.0.0.3, 4421]: 17486 00:19:06.253 @path[10.0.0.3, 4421]: 17207 00:19:06.253 @path[10.0.0.3, 4421]: 17560 00:19:06.253 @path[10.0.0.3, 4421]: 17316 00:19:06.253 03:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:06.253 03:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:06.253 03:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:06.253 03:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:06.253 03:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:06.253 03:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:06.253 03:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81423 00:19:06.253 03:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:06.253 03:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:06.253 03:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:19:07.189 03:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:19:07.189 03:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81541 00:19:07.189 03:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80860 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:07.189 03:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:13.758 03:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:13.758 03:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:13.759 03:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:13.759 03:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:13.759 Attaching 4 probes... 
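[editor's note] After confirming I/O on the optimized 4421 path, the test removes that listener so the host must fail over to the non_optimized 4420 path. A minimal sketch of the check the script performs to see which listener currently reports the "optimized" ANA state, using the same nvmf_subsystem_get_listeners + jq filter traced above:
# Sketch: query the port whose first ANA state is "optimized".
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
active_port=$("$RPC" nvmf_subsystem_get_listeners "$NQN" \
  | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')
echo "optimized listener port: ${active_port:-none}"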
00:19:13.759 @path[10.0.0.3, 4420]: 15601 00:19:13.759 @path[10.0.0.3, 4420]: 15897 00:19:13.759 @path[10.0.0.3, 4420]: 15915 00:19:13.759 @path[10.0.0.3, 4420]: 15878 00:19:13.759 @path[10.0.0.3, 4420]: 15584 00:19:13.759 03:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:13.759 03:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:13.759 03:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:13.759 03:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:13.759 03:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:13.759 03:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:13.759 03:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81541 00:19:13.759 03:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:13.759 03:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:14.017 [2024-10-09 03:21:57.066895] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:14.018 03:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:14.276 03:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:19:20.843 03:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:19:20.843 03:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81721 00:19:20.843 03:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80860 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:20.843 03:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:26.111 03:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:26.112 03:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:26.370 03:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:26.370 03:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:26.370 Attaching 4 probes... 
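[editor's note] Once failover to 4420 is confirmed, the test adds the 4421 listener back and marks it optimized, after which I/O is expected to return to 4421. A minimal sketch of that add-back sequence, using the same rpc.py calls and values shown in the log:
# Sketch: re-create the 4421 listener and mark it optimized again.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4421
"$RPC" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.3 -s 4421 -n optimized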
00:19:26.370 @path[10.0.0.3, 4421]: 17277 00:19:26.370 @path[10.0.0.3, 4421]: 17555 00:19:26.370 @path[10.0.0.3, 4421]: 17579 00:19:26.370 @path[10.0.0.3, 4421]: 17727 00:19:26.370 @path[10.0.0.3, 4421]: 17742 00:19:26.370 03:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:26.370 03:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:26.370 03:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:26.370 03:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:26.370 03:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:26.370 03:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:26.370 03:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81721 00:19:26.370 03:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:26.630 03:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80916 00:19:26.630 03:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 80916 ']' 00:19:26.630 03:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 80916 00:19:26.630 03:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:19:26.630 03:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:26.630 03:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80916 00:19:26.630 03:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:26.630 03:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:26.630 killing process with pid 80916 00:19:26.630 03:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80916' 00:19:26.630 03:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 80916 00:19:26.630 03:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 80916 00:19:26.630 { 00:19:26.630 "results": [ 00:19:26.630 { 00:19:26.630 "job": "Nvme0n1", 00:19:26.630 "core_mask": "0x4", 00:19:26.630 "workload": "verify", 00:19:26.630 "status": "terminated", 00:19:26.630 "verify_range": { 00:19:26.630 "start": 0, 00:19:26.630 "length": 16384 00:19:26.630 }, 00:19:26.630 "queue_depth": 128, 00:19:26.630 "io_size": 4096, 00:19:26.630 "runtime": 55.694838, 00:19:26.630 "iops": 7345.582008874862, 00:19:26.630 "mibps": 28.69367972216743, 00:19:26.630 "io_failed": 0, 00:19:26.630 "io_timeout": 0, 00:19:26.630 "avg_latency_us": 17397.188520012685, 00:19:26.630 "min_latency_us": 1124.5381818181818, 00:19:26.630 "max_latency_us": 7046430.72 00:19:26.630 } 00:19:26.630 ], 00:19:26.630 "core_count": 1 00:19:26.630 } 00:19:26.907 03:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80916 00:19:26.907 03:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:26.907 [2024-10-09 03:21:12.045667] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 
24.03.0 initialization... 00:19:26.907 [2024-10-09 03:21:12.045775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80916 ] 00:19:26.907 [2024-10-09 03:21:12.177695] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.907 [2024-10-09 03:21:12.285643] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.907 [2024-10-09 03:21:12.342225] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:26.907 Running I/O for 90 seconds... 00:19:26.907 7527.00 IOPS, 29.40 MiB/s [2024-10-09T03:22:10.210Z] 8377.00 IOPS, 32.72 MiB/s [2024-10-09T03:22:10.210Z] 8733.00 IOPS, 34.11 MiB/s [2024-10-09T03:22:10.210Z] 8904.50 IOPS, 34.78 MiB/s [2024-10-09T03:22:10.210Z] 8984.40 IOPS, 35.10 MiB/s [2024-10-09T03:22:10.210Z] 9036.00 IOPS, 35.30 MiB/s [2024-10-09T03:22:10.210Z] 9048.29 IOPS, 35.34 MiB/s [2024-10-09T03:22:10.210Z] 9062.75 IOPS, 35.40 MiB/s [2024-10-09T03:22:10.210Z] [2024-10-09 03:21:22.414930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.907 [2024-10-09 03:21:22.414996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:26.907 [2024-10-09 03:21:22.415082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.907 [2024-10-09 03:21:22.415106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:26.907 [2024-10-09 03:21:22.415129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.907 [2024-10-09 03:21:22.415143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:26.907 [2024-10-09 03:21:22.415163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.908 [2024-10-09 03:21:22.415177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.415197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.908 [2024-10-09 03:21:22.415210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.415230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.908 [2024-10-09 03:21:22.415244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.415263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.908 [2024-10-09 03:21:22.415293] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.415313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.908 [2024-10-09 03:21:22.415327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.415357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:83664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.908 [2024-10-09 03:21:22.415371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.415418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:83672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.908 [2024-10-09 03:21:22.415493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.415517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:83680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.908 [2024-10-09 03:21:22.415533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.415555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.908 [2024-10-09 03:21:22.415570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.415592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:83696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.908 [2024-10-09 03:21:22.415607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.415628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.908 [2024-10-09 03:21:22.415644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.415665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:83712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.908 [2024-10-09 03:21:22.415680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.415702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:83720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.908 [2024-10-09 03:21:22.415717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.415739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:83728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:26.908 [2024-10-09 03:21:22.415754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.415810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:83736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.908 [2024-10-09 03:21:22.415839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.415859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:83744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.908 [2024-10-09 03:21:22.415874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.415894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:83752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.908 [2024-10-09 03:21:22.415908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.415927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:83760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.908 [2024-10-09 03:21:22.415941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.415975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:83768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.908 [2024-10-09 03:21:22.415997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.416027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:83776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.908 [2024-10-09 03:21:22.416041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.416061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:83784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.908 [2024-10-09 03:21:22.416074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.416251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.908 [2024-10-09 03:21:22.416278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.416301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.908 [2024-10-09 03:21:22.416316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.416335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:47 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.908 [2024-10-09 03:21:22.416357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.416377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.908 [2024-10-09 03:21:22.416392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.416411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.908 [2024-10-09 03:21:22.416425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.416445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.908 [2024-10-09 03:21:22.416459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.416478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.908 [2024-10-09 03:21:22.416492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.416512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.908 [2024-10-09 03:21:22.416525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.416545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.908 [2024-10-09 03:21:22.416559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.416578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.908 [2024-10-09 03:21:22.416593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.416641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.908 [2024-10-09 03:21:22.416656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.416677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.908 [2024-10-09 03:21:22.416691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.416711] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.908 [2024-10-09 03:21:22.416725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.416745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.908 [2024-10-09 03:21:22.416759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.416795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.908 [2024-10-09 03:21:22.416810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.416830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.908 [2024-10-09 03:21:22.416845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.416866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:83792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.908 [2024-10-09 03:21:22.416898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.416936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.908 [2024-10-09 03:21:22.416951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.416973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:83808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.908 [2024-10-09 03:21:22.416988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.417011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:83816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.908 [2024-10-09 03:21:22.417026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:26.908 [2024-10-09 03:21:22.417049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.909 [2024-10-09 03:21:22.417064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.417085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.909 [2024-10-09 03:21:22.417101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005d p:0 m:0 
dnr:0 00:19:26.909 [2024-10-09 03:21:22.417146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.909 [2024-10-09 03:21:22.417165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.417187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.909 [2024-10-09 03:21:22.417203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.417225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.909 [2024-10-09 03:21:22.417256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.417292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.909 [2024-10-09 03:21:22.417322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.417343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.909 [2024-10-09 03:21:22.417357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.417377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.909 [2024-10-09 03:21:22.417392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.417412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.909 [2024-10-09 03:21:22.417427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.417447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.909 [2024-10-09 03:21:22.417461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.417481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.909 [2024-10-09 03:21:22.417495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.417515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.909 [2024-10-09 03:21:22.417529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.417549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.909 [2024-10-09 03:21:22.417564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.417585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.909 [2024-10-09 03:21:22.417599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.417619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.909 [2024-10-09 03:21:22.417640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.417661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.909 [2024-10-09 03:21:22.417676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.417696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.909 [2024-10-09 03:21:22.417711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.417731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.909 [2024-10-09 03:21:22.417745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.417765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.909 [2024-10-09 03:21:22.417780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.417800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.909 [2024-10-09 03:21:22.417814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.417853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.909 [2024-10-09 03:21:22.417872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.417893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.909 [2024-10-09 03:21:22.417907] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.417927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.909 [2024-10-09 03:21:22.417942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.417962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.909 [2024-10-09 03:21:22.417976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.417997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.909 [2024-10-09 03:21:22.418011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.418031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.909 [2024-10-09 03:21:22.418045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.418121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.909 [2024-10-09 03:21:22.418150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.418175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.909 [2024-10-09 03:21:22.418191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.418213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.909 [2024-10-09 03:21:22.418229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.418251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.909 [2024-10-09 03:21:22.418280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.418302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.909 [2024-10-09 03:21:22.418326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.418348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:26.909 [2024-10-09 03:21:22.418364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.418386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.909 [2024-10-09 03:21:22.418401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.418423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.909 [2024-10-09 03:21:22.418445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.418478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.909 [2024-10-09 03:21:22.418494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.418516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.909 [2024-10-09 03:21:22.418532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.418554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.909 [2024-10-09 03:21:22.418569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.418591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.909 [2024-10-09 03:21:22.418606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.418670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.909 [2024-10-09 03:21:22.418700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.418727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.909 [2024-10-09 03:21:22.418742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:26.909 [2024-10-09 03:21:22.418762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.910 [2024-10-09 03:21:22.418777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.418798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.910 [2024-10-09 03:21:22.418812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.418832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.910 [2024-10-09 03:21:22.418847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.418867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.910 [2024-10-09 03:21:22.418881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.418901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.910 [2024-10-09 03:21:22.418916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.418936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.910 [2024-10-09 03:21:22.418950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.418971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.910 [2024-10-09 03:21:22.418985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.419005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.910 [2024-10-09 03:21:22.419019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.419039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.910 [2024-10-09 03:21:22.419054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.419074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.910 [2024-10-09 03:21:22.419099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.419123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.910 [2024-10-09 03:21:22.419138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.419165] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.910 [2024-10-09 03:21:22.419181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.419201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.910 [2024-10-09 03:21:22.419215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.419235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.910 [2024-10-09 03:21:22.419249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.419270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.910 [2024-10-09 03:21:22.419285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.419305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:83960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.910 [2024-10-09 03:21:22.419319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.419339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.910 [2024-10-09 03:21:22.419353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.419374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.910 [2024-10-09 03:21:22.419388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.419408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:83984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.910 [2024-10-09 03:21:22.419423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.419442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.910 [2024-10-09 03:21:22.419457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.419477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.910 [2024-10-09 03:21:22.419499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0018 p:0 m:0 
dnr:0 00:19:26.910 [2024-10-09 03:21:22.419519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.910 [2024-10-09 03:21:22.419533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.419553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.910 [2024-10-09 03:21:22.419584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.419604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.910 [2024-10-09 03:21:22.419625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.419663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.910 [2024-10-09 03:21:22.419679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.421087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.910 [2024-10-09 03:21:22.421130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.421159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.910 [2024-10-09 03:21:22.421210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.421233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.910 [2024-10-09 03:21:22.421249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.421272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.910 [2024-10-09 03:21:22.421288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.421310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.910 [2024-10-09 03:21:22.421325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.421347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.910 [2024-10-09 03:21:22.421363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.421384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.910 [2024-10-09 03:21:22.421400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.421423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.910 [2024-10-09 03:21:22.421439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.421639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.910 [2024-10-09 03:21:22.421664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.421688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.910 [2024-10-09 03:21:22.421704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.421725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.910 [2024-10-09 03:21:22.421750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.421772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.910 [2024-10-09 03:21:22.421788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.421808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.910 [2024-10-09 03:21:22.421822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.421842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.910 [2024-10-09 03:21:22.421856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.421876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.910 [2024-10-09 03:21:22.421890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:26.910 [2024-10-09 03:21:22.421910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.910 [2024-10-09 03:21:22.421925] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:26.910 8989.67 IOPS, 35.12 MiB/s [2024-10-09T03:22:10.213Z] 8937.10 IOPS, 34.91 MiB/s [2024-10-09T03:22:10.213Z] 8895.55 IOPS, 34.75 MiB/s [2024-10-09T03:22:10.213Z] 8862.92 IOPS, 34.62 MiB/s [2024-10-09T03:22:10.213Z] 8842.69 IOPS, 34.54 MiB/s [2024-10-09T03:22:10.214Z] 8824.79 IOPS, 34.47 MiB/s [2024-10-09T03:22:10.214Z] [2024-10-09 03:21:29.009302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.911 [2024-10-09 03:21:29.009362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.009471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.911 [2024-10-09 03:21:29.009494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.009516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.911 [2024-10-09 03:21:29.009531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.009552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.911 [2024-10-09 03:21:29.009566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.009586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.911 [2024-10-09 03:21:29.009601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.009621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.911 [2024-10-09 03:21:29.009635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.009655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.911 [2024-10-09 03:21:29.009692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.009714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.911 [2024-10-09 03:21:29.009739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.009799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.911 [2024-10-09 03:21:29.009812] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.009847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.911 [2024-10-09 03:21:29.009861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.009881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.911 [2024-10-09 03:21:29.009894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.009930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.911 [2024-10-09 03:21:29.009944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.009971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.911 [2024-10-09 03:21:29.009989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.010009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.911 [2024-10-09 03:21:29.010023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.010043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.911 [2024-10-09 03:21:29.010057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.010077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.911 [2024-10-09 03:21:29.010138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.010162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.911 [2024-10-09 03:21:29.010178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.010200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.911 [2024-10-09 03:21:29.010216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.010238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:26.911 [2024-10-09 03:21:29.010263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.010287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.911 [2024-10-09 03:21:29.010302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.010324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.911 [2024-10-09 03:21:29.010339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.010361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.911 [2024-10-09 03:21:29.010376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.010398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.911 [2024-10-09 03:21:29.010428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.010464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.911 [2024-10-09 03:21:29.010481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.010507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.911 [2024-10-09 03:21:29.010522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.010543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.911 [2024-10-09 03:21:29.010558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.010579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.911 [2024-10-09 03:21:29.010593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.010613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.911 [2024-10-09 03:21:29.010627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.010647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7160 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.911 [2024-10-09 03:21:29.010662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.010685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.911 [2024-10-09 03:21:29.010706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.010740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.911 [2024-10-09 03:21:29.010772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.010799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.911 [2024-10-09 03:21:29.010814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.010833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.911 [2024-10-09 03:21:29.010847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.010867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.911 [2024-10-09 03:21:29.010880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.010900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.911 [2024-10-09 03:21:29.010913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:26.911 [2024-10-09 03:21:29.010932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.912 [2024-10-09 03:21:29.010946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:26.912 [2024-10-09 03:21:29.010965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.912 [2024-10-09 03:21:29.010978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:26.912 [2024-10-09 03:21:29.010998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.912 [2024-10-09 03:21:29.011012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:26.912 [2024-10-09 03:21:29.011031] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.912 [2024-10-09 03:21:29.011044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:26.912 [2024-10-09 03:21:29.011063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.912 [2024-10-09 03:21:29.011078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:26.912 [2024-10-09 03:21:29.011098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.912 [2024-10-09 03:21:29.011124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:26.912 [2024-10-09 03:21:29.011145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.912 [2024-10-09 03:21:29.011159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:26.912 [2024-10-09 03:21:29.011179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.912 [2024-10-09 03:21:29.011192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:26.912 [2024-10-09 03:21:29.011220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.912 [2024-10-09 03:21:29.011234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:26.912 [2024-10-09 03:21:29.011253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.912 [2024-10-09 03:21:29.011266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:26.912 [2024-10-09 03:21:29.011286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.912 [2024-10-09 03:21:29.011299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:26.912 [2024-10-09 03:21:29.011318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.912 [2024-10-09 03:21:29.011332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:26.912 [2024-10-09 03:21:29.011351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.912 [2024-10-09 03:21:29.011364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:26.912 [2024-10-09 03:21:29.011400] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.912 [2024-10-09 03:21:29.011446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:26.912 [2024-10-09 03:21:29.011468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.912 [2024-10-09 03:21:29.011483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:26.912 [2024-10-09 03:21:29.011504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.912 [2024-10-09 03:21:29.011519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:26.912 [2024-10-09 03:21:29.011542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.912 [2024-10-09 03:21:29.011581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:26.912 [2024-10-09 03:21:29.011602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.912 [2024-10-09 03:21:29.011617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:26.912 [2024-10-09 03:21:29.011639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.912 [2024-10-09 03:21:29.011654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:26.912 [2024-10-09 03:21:29.011675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.912 [2024-10-09 03:21:29.011690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:26.912 [2024-10-09 03:21:29.011712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.912 [2024-10-09 03:21:29.011766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:26.912 [2024-10-09 03:21:29.011815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.912 [2024-10-09 03:21:29.011831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:26.912 [2024-10-09 03:21:29.011852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.912 [2024-10-09 03:21:29.011867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003b p:0 m:0 
dnr:0 00:19:26.912 [2024-10-09 03:21:29.011889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.912 [2024-10-09 03:21:29.011951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:26.912 [2024-10-09 03:21:29.011986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.912 [2024-10-09 03:21:29.011999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:26.912 [2024-10-09 03:21:29.012018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.912 [2024-10-09 03:21:29.012032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:26.912 [2024-10-09 03:21:29.012052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.912 [2024-10-09 03:21:29.012082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:26.912 [2024-10-09 03:21:29.012102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.912 [2024-10-09 03:21:29.012115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:26.912 [2024-10-09 03:21:29.012136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.912 [2024-10-09 03:21:29.012151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:26.912 [2024-10-09 03:21:29.012173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.912 [2024-10-09 03:21:29.012188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:26.912 [2024-10-09 03:21:29.012221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.912 [2024-10-09 03:21:29.012238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:26.912 [2024-10-09 03:21:29.012259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.912 [2024-10-09 03:21:29.012274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:26.912 [2024-10-09 03:21:29.012293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.912 [2024-10-09 03:21:29.012315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:62 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:26.912 [2024-10-09 03:21:29.012336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.912 [2024-10-09 03:21:29.012351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:26.912 [2024-10-09 03:21:29.012371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.912 [2024-10-09 03:21:29.012435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:26.912 [2024-10-09 03:21:29.012471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.913 [2024-10-09 03:21:29.012485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.012505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.913 [2024-10-09 03:21:29.012521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.012541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.913 [2024-10-09 03:21:29.012556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.012611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.913 [2024-10-09 03:21:29.012626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.012648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.913 [2024-10-09 03:21:29.012663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.012685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.913 [2024-10-09 03:21:29.012700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.012721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.913 [2024-10-09 03:21:29.012737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.012767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.913 [2024-10-09 03:21:29.012785] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.012822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.913 [2024-10-09 03:21:29.012836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.012857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.913 [2024-10-09 03:21:29.012872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.012900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.913 [2024-10-09 03:21:29.012915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.012936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.913 [2024-10-09 03:21:29.012951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.012973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.913 [2024-10-09 03:21:29.012987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.013023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.913 [2024-10-09 03:21:29.013052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.013072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.913 [2024-10-09 03:21:29.013086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.013105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.913 [2024-10-09 03:21:29.013135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.013156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.913 [2024-10-09 03:21:29.013170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.013207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.913 [2024-10-09 03:21:29.013236] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.013263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.913 [2024-10-09 03:21:29.013279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.013300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.913 [2024-10-09 03:21:29.013316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.013337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.913 [2024-10-09 03:21:29.013352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.013373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.913 [2024-10-09 03:21:29.013424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.013453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.913 [2024-10-09 03:21:29.013469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.013491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.913 [2024-10-09 03:21:29.013506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.013528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.913 [2024-10-09 03:21:29.013543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.013564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.913 [2024-10-09 03:21:29.013580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.013616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.913 [2024-10-09 03:21:29.013630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.013652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:26.913 [2024-10-09 03:21:29.013667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.013688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.913 [2024-10-09 03:21:29.013703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.013723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.913 [2024-10-09 03:21:29.013747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.013806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.913 [2024-10-09 03:21:29.013820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.013839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.913 [2024-10-09 03:21:29.013853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.013873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.913 [2024-10-09 03:21:29.013886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.013906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.913 [2024-10-09 03:21:29.013928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.013948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.913 [2024-10-09 03:21:29.013968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.013989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.913 [2024-10-09 03:21:29.014004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.014023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.913 [2024-10-09 03:21:29.014037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.014057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7024 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.913 [2024-10-09 03:21:29.014070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.014131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.913 [2024-10-09 03:21:29.014150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:26.913 [2024-10-09 03:21:29.014172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.913 [2024-10-09 03:21:29.014187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:29.014209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.914 [2024-10-09 03:21:29.014224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:29.014976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.914 [2024-10-09 03:21:29.015005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:29.015038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.914 [2024-10-09 03:21:29.015056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:29.015094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.914 [2024-10-09 03:21:29.015110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:29.015156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.914 [2024-10-09 03:21:29.015174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:29.015202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.914 [2024-10-09 03:21:29.015217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:29.015246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.914 [2024-10-09 03:21:29.015289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:29.015347] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.914 [2024-10-09 03:21:29.015362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:29.015430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.914 [2024-10-09 03:21:29.015445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:29.015492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.914 [2024-10-09 03:21:29.015518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:29.015547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.914 [2024-10-09 03:21:29.015562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:29.015590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.914 [2024-10-09 03:21:29.015605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:29.015633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.914 [2024-10-09 03:21:29.015648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:29.015676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.914 [2024-10-09 03:21:29.015690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:29.015718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.914 [2024-10-09 03:21:29.015733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:29.015760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.914 [2024-10-09 03:21:29.015801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:29.015842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.914 [2024-10-09 03:21:29.015856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.914 
[2024-10-09 03:21:29.015885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.914 [2024-10-09 03:21:29.015900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.914 8793.27 IOPS, 34.35 MiB/s [2024-10-09T03:22:10.217Z] 8247.75 IOPS, 32.22 MiB/s [2024-10-09T03:22:10.217Z] 8334.82 IOPS, 32.56 MiB/s [2024-10-09T03:22:10.217Z] 8413.11 IOPS, 32.86 MiB/s [2024-10-09T03:22:10.217Z] 8433.89 IOPS, 32.94 MiB/s [2024-10-09T03:22:10.217Z] 8455.00 IOPS, 33.03 MiB/s [2024-10-09T03:22:10.217Z] 8466.10 IOPS, 33.07 MiB/s [2024-10-09T03:22:10.217Z] 8471.82 IOPS, 33.09 MiB/s [2024-10-09T03:22:10.217Z] [2024-10-09 03:21:36.106152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:55648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.914 [2024-10-09 03:21:36.106205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:36.106273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:55656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.914 [2024-10-09 03:21:36.106293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:36.106314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:55664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.914 [2024-10-09 03:21:36.106328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:36.106346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:55672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.914 [2024-10-09 03:21:36.106359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:36.106378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:55680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.914 [2024-10-09 03:21:36.106391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:36.106409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:55688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.914 [2024-10-09 03:21:36.106425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:36.106444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:55696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.914 [2024-10-09 03:21:36.106457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:36.106476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:55704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.914 [2024-10-09 03:21:36.106489] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:36.106517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:55712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.914 [2024-10-09 03:21:36.106531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:36.106550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:55720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.914 [2024-10-09 03:21:36.106563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:36.106581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:55728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.914 [2024-10-09 03:21:36.106594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:36.106612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:55736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.914 [2024-10-09 03:21:36.106625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:36.106643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:55744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.914 [2024-10-09 03:21:36.106669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:36.106690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:55752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.914 [2024-10-09 03:21:36.106704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:36.106722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:55760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.914 [2024-10-09 03:21:36.106736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:36.106754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:55768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.914 [2024-10-09 03:21:36.106768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:36.106787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:55200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.914 [2024-10-09 03:21:36.106800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:36.106821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:55208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:26.914 [2024-10-09 03:21:36.106836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:36.106854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:55216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.914 [2024-10-09 03:21:36.106868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:36.106887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:55224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.914 [2024-10-09 03:21:36.106900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:36.106919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:55232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.914 [2024-10-09 03:21:36.106932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:26.914 [2024-10-09 03:21:36.106951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:55240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.915 [2024-10-09 03:21:36.106964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.106983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:55248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.915 [2024-10-09 03:21:36.106996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.107015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:55256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.915 [2024-10-09 03:21:36.107028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.107284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:55776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.915 [2024-10-09 03:21:36.107319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.107346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:55784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.915 [2024-10-09 03:21:36.107361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.107381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:55792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.915 [2024-10-09 03:21:36.107395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.107415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 
lba:55800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.915 [2024-10-09 03:21:36.107429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.107449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:55808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.915 [2024-10-09 03:21:36.107462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.107482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:55816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.915 [2024-10-09 03:21:36.107496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.107516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:55824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.915 [2024-10-09 03:21:36.107529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.107549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:55832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.915 [2024-10-09 03:21:36.107563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.107582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:55840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.915 [2024-10-09 03:21:36.107596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.107616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:55848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.915 [2024-10-09 03:21:36.107630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.107650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:55856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.915 [2024-10-09 03:21:36.107664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.107684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:55864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.915 [2024-10-09 03:21:36.107697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.107717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:55872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.915 [2024-10-09 03:21:36.107731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.107759] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:55880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.915 [2024-10-09 03:21:36.107773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.107793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:55888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.915 [2024-10-09 03:21:36.107806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.107826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:55896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.915 [2024-10-09 03:21:36.107840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.107860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:55264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.915 [2024-10-09 03:21:36.107874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.107894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:55272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.915 [2024-10-09 03:21:36.107907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.107927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:55280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.915 [2024-10-09 03:21:36.107940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.107960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:55288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.915 [2024-10-09 03:21:36.107973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.107993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:55296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.915 [2024-10-09 03:21:36.108006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.108027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:55304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.915 [2024-10-09 03:21:36.108040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.108106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:55312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.915 [2024-10-09 03:21:36.108121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004d p:0 m:0 dnr:0 
00:19:26.915 [2024-10-09 03:21:36.108142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:55320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.915 [2024-10-09 03:21:36.108156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.108176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.915 [2024-10-09 03:21:36.108190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.108221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:55336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.915 [2024-10-09 03:21:36.108236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.108258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:55344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.915 [2024-10-09 03:21:36.108271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.108292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:55352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.915 [2024-10-09 03:21:36.108305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.108326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:55360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.915 [2024-10-09 03:21:36.108339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.108360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:55368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.915 [2024-10-09 03:21:36.108373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.108394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.915 [2024-10-09 03:21:36.108407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.108428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:55384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.915 [2024-10-09 03:21:36.108442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.108466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:55904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.915 [2024-10-09 03:21:36.108481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.108502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.915 [2024-10-09 03:21:36.108516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.108536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:55920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.915 [2024-10-09 03:21:36.108550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.108571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:55928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.915 [2024-10-09 03:21:36.108584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.108605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:55936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.915 [2024-10-09 03:21:36.108618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.108639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:55944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.915 [2024-10-09 03:21:36.108660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:26.915 [2024-10-09 03:21:36.108681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:55952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.915 [2024-10-09 03:21:36.108695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:26.916 [2024-10-09 03:21:36.108716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:55960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.916 [2024-10-09 03:21:36.108730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:26.916 [2024-10-09 03:21:36.108751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:55968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.916 [2024-10-09 03:21:36.108765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:26.916 [2024-10-09 03:21:36.108786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:55976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.916 [2024-10-09 03:21:36.108800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:26.916 [2024-10-09 03:21:36.108820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:55984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.916 [2024-10-09 03:21:36.108834] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:26.916 [2024-10-09 03:21:36.108854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:55992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.916 [2024-10-09 03:21:36.108868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:26.916 [2024-10-09 03:21:36.108889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:56000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.916 [2024-10-09 03:21:36.108902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:26.916 [2024-10-09 03:21:36.108923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:56008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.916 [2024-10-09 03:21:36.108936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:26.916 [2024-10-09 03:21:36.108957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:56016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.916 [2024-10-09 03:21:36.108970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:26.916 [2024-10-09 03:21:36.108991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:56024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.916 [2024-10-09 03:21:36.109005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:26.916 [2024-10-09 03:21:36.109025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:55392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.916 [2024-10-09 03:21:36.109039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:26.916 [2024-10-09 03:21:36.109059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:55400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.916 [2024-10-09 03:21:36.109095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:26.916 [2024-10-09 03:21:36.109120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:55408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.916 [2024-10-09 03:21:36.109134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:26.916 [2024-10-09 03:21:36.109155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:55416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.916 [2024-10-09 03:21:36.109169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:26.916 [2024-10-09 03:21:36.109190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:55424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:26.916 [2024-10-09 03:21:36.109203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:26.916 [2024-10-09 03:21:36.109224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:55432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.916 [2024-10-09 03:21:36.109237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:26.916 [2024-10-09 03:21:36.109258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:55440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.916 [2024-10-09 03:21:36.109272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:26.916 [2024-10-09 03:21:36.109292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:55448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.916 [2024-10-09 03:21:36.109306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:26.916 [2024-10-09 03:21:36.109327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:55456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.916 [2024-10-09 03:21:36.109341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:26.916 [2024-10-09 03:21:36.109361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:55464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.916 [2024-10-09 03:21:36.109375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:26.916 [2024-10-09 03:21:36.109396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:55472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.916 [2024-10-09 03:21:36.109410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:26.916 [2024-10-09 03:21:36.109430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:55480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.916 [2024-10-09 03:21:36.109444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:26.916 [2024-10-09 03:21:36.109464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:55488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.916 [2024-10-09 03:21:36.109478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:26.916 [2024-10-09 03:21:36.109498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:55496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.916 [2024-10-09 03:21:36.109511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:26.916 [2024-10-09 03:21:36.109538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 
nsid:1 lba:55504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.916 [2024-10-09 03:21:36.109553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:26.916 [2024-10-09 03:21:36.109574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:55512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.916 [2024-10-09 03:21:36.109588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:26.916 [2024-10-09 03:21:36.109613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:56032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.916 [2024-10-09 03:21:36.109628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:26.916 [2024-10-09 03:21:36.109649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:56040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.916 [2024-10-09 03:21:36.109663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:26.916 [2024-10-09 03:21:36.109684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:56048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.916 [2024-10-09 03:21:36.109698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:26.916 [2024-10-09 03:21:36.109718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:56056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.916 [2024-10-09 03:21:36.109732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:26.916 [2024-10-09 03:21:36.109752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:56064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.916 [2024-10-09 03:21:36.109766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:26.916 [2024-10-09 03:21:36.109787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:56072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.916 [2024-10-09 03:21:36.109800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:26.916 [2024-10-09 03:21:36.109821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:56080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.916 [2024-10-09 03:21:36.109834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:36.109855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:56088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.917 [2024-10-09 03:21:36.109868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:36.109889] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:56096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.917 [2024-10-09 03:21:36.109902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:36.109922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:56104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.917 [2024-10-09 03:21:36.109936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:36.109963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:56112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.917 [2024-10-09 03:21:36.109977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:36.109998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:56120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.917 [2024-10-09 03:21:36.110011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:36.110032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:56128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.917 [2024-10-09 03:21:36.110057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:36.110081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:56136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.917 [2024-10-09 03:21:36.110114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:36.110136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:56144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.917 [2024-10-09 03:21:36.110159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:36.110180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:56152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.917 [2024-10-09 03:21:36.110193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:36.110214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:55520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.917 [2024-10-09 03:21:36.110228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:36.110249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:55528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.917 [2024-10-09 03:21:36.110262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 
00:19:26.917 [2024-10-09 03:21:36.110283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:55536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.917 [2024-10-09 03:21:36.110297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:36.110327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:55544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.917 [2024-10-09 03:21:36.110341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:36.110362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:55552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.917 [2024-10-09 03:21:36.110376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:36.110396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:55560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.917 [2024-10-09 03:21:36.110410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:36.110441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:55568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.917 [2024-10-09 03:21:36.110468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:36.110490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:55576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.917 [2024-10-09 03:21:36.110503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:36.110529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:55584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.917 [2024-10-09 03:21:36.110543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:36.110563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:55592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.917 [2024-10-09 03:21:36.110577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:36.110598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:55600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.917 [2024-10-09 03:21:36.110611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:36.110632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:55608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.917 [2024-10-09 03:21:36.110645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:36.110666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:55616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.917 [2024-10-09 03:21:36.110679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:36.110700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:55624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.917 [2024-10-09 03:21:36.110714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:36.110734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:55632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.917 [2024-10-09 03:21:36.110748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:36.110769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:55640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.917 [2024-10-09 03:21:36.110782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:36.110807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:56160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.917 [2024-10-09 03:21:36.110822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:36.110842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:56168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.917 [2024-10-09 03:21:36.110856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:36.110876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:56176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.917 [2024-10-09 03:21:36.110895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:36.110922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:56184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.917 [2024-10-09 03:21:36.110937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:36.110957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:56192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.917 [2024-10-09 03:21:36.110970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:36.110991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:56200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.917 [2024-10-09 03:21:36.111004] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:36.111025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:56208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.917 [2024-10-09 03:21:36.111039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:36.111072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:56216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.917 [2024-10-09 03:21:36.111087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:26.917 8135.83 IOPS, 31.78 MiB/s [2024-10-09T03:22:10.220Z] 7796.83 IOPS, 30.46 MiB/s [2024-10-09T03:22:10.220Z] 7484.96 IOPS, 29.24 MiB/s [2024-10-09T03:22:10.220Z] 7197.08 IOPS, 28.11 MiB/s [2024-10-09T03:22:10.220Z] 6930.52 IOPS, 27.07 MiB/s [2024-10-09T03:22:10.220Z] 6683.00 IOPS, 26.11 MiB/s [2024-10-09T03:22:10.220Z] 6452.55 IOPS, 25.21 MiB/s [2024-10-09T03:22:10.220Z] 6519.00 IOPS, 25.46 MiB/s [2024-10-09T03:22:10.220Z] 6608.84 IOPS, 25.82 MiB/s [2024-10-09T03:22:10.220Z] 6658.56 IOPS, 26.01 MiB/s [2024-10-09T03:22:10.220Z] 6720.55 IOPS, 26.25 MiB/s [2024-10-09T03:22:10.220Z] 6780.03 IOPS, 26.48 MiB/s [2024-10-09T03:22:10.220Z] 6835.23 IOPS, 26.70 MiB/s [2024-10-09T03:22:10.220Z] [2024-10-09 03:21:49.425329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:106824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.917 [2024-10-09 03:21:49.425414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:49.425474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:106832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.917 [2024-10-09 03:21:49.425498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:49.425519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:106840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.917 [2024-10-09 03:21:49.425533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:49.425552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:106848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.917 [2024-10-09 03:21:49.425582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:26.917 [2024-10-09 03:21:49.425601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:106856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.918 [2024-10-09 03:21:49.425615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.425634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:106864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.918 [2024-10-09 
03:21:49.425679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.425702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:106872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.918 [2024-10-09 03:21:49.425716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.425735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:106880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.918 [2024-10-09 03:21:49.425749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.425769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:106248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.918 [2024-10-09 03:21:49.425782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.425802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:106256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.918 [2024-10-09 03:21:49.425816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.425835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:106264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.918 [2024-10-09 03:21:49.425848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.425868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:106272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.918 [2024-10-09 03:21:49.425887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.425905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:106280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.918 [2024-10-09 03:21:49.425918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.425937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:106288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.918 [2024-10-09 03:21:49.425965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.425984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:106296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.918 [2024-10-09 03:21:49.425997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.426015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:106304 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.918 [2024-10-09 03:21:49.426028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.426046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:106312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.918 [2024-10-09 03:21:49.426059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.426132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:106320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.918 [2024-10-09 03:21:49.426161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.426183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:106328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.918 [2024-10-09 03:21:49.426198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.426218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:106336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.918 [2024-10-09 03:21:49.426232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.426252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:106344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.918 [2024-10-09 03:21:49.426266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.426287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:106352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.918 [2024-10-09 03:21:49.426301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.426321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:106360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.918 [2024-10-09 03:21:49.426335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.426355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.918 [2024-10-09 03:21:49.426370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.426391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:106376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.918 [2024-10-09 03:21:49.426405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.426450] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:106384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.918 [2024-10-09 03:21:49.426464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.426483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:106392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.918 [2024-10-09 03:21:49.426497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.426517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:106400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.918 [2024-10-09 03:21:49.426531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.426550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.918 [2024-10-09 03:21:49.426580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.426598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:106416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.918 [2024-10-09 03:21:49.426611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.426636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:106424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.918 [2024-10-09 03:21:49.426651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.426670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.918 [2024-10-09 03:21:49.426683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.426730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:106888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.918 [2024-10-09 03:21:49.426749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.426766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:106896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.918 [2024-10-09 03:21:49.426779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.426793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:106904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.918 [2024-10-09 03:21:49.426806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 
03:21:49.426819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:106912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.918 [2024-10-09 03:21:49.426832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.426846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:106920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.918 [2024-10-09 03:21:49.426858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.426872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:106928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.918 [2024-10-09 03:21:49.426884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.426898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:106936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.918 [2024-10-09 03:21:49.426910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.426924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:106944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.918 [2024-10-09 03:21:49.426937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.426951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:106952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.918 [2024-10-09 03:21:49.426963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.426977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:106960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.918 [2024-10-09 03:21:49.426989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.427003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:106968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.918 [2024-10-09 03:21:49.427024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.427039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:106976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.918 [2024-10-09 03:21:49.427051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.427064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:106984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.918 [2024-10-09 03:21:49.427077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.427090] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:106992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.918 [2024-10-09 03:21:49.427102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.918 [2024-10-09 03:21:49.427133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:107000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.919 [2024-10-09 03:21:49.427146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.919 [2024-10-09 03:21:49.427160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:107008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.919 [2024-10-09 03:21:49.427172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.919 [2024-10-09 03:21:49.427185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:106440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.919 [2024-10-09 03:21:49.427197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.919 [2024-10-09 03:21:49.427213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:106448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.919 [2024-10-09 03:21:49.427225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.919 [2024-10-09 03:21:49.427239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:106456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.919 [2024-10-09 03:21:49.427251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.919 [2024-10-09 03:21:49.427265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:106464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.919 [2024-10-09 03:21:49.427277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.919 [2024-10-09 03:21:49.427290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:106472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.919 [2024-10-09 03:21:49.427303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.919 [2024-10-09 03:21:49.427316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:106480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.919 [2024-10-09 03:21:49.427329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.919 [2024-10-09 03:21:49.427342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:106488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.919 [2024-10-09 03:21:49.427354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.919 [2024-10-09 03:21:49.427375] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:77 nsid:1 lba:106496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.919 [2024-10-09 03:21:49.427388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.919 [2024-10-09 03:21:49.427402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:107016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.919 [2024-10-09 03:21:49.427414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.919 [2024-10-09 03:21:49.427427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:107024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.919 [2024-10-09 03:21:49.427439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.919 [2024-10-09 03:21:49.427453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:107032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.919 [2024-10-09 03:21:49.427465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.919 [2024-10-09 03:21:49.427479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:107040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.919 [2024-10-09 03:21:49.427491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.919 [2024-10-09 03:21:49.427505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:107048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.919 [2024-10-09 03:21:49.427517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.919 [2024-10-09 03:21:49.427530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:107056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.919 [2024-10-09 03:21:49.427543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.919 [2024-10-09 03:21:49.427556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:107064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.919 [2024-10-09 03:21:49.427568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.919 [2024-10-09 03:21:49.427582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:107072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.919 [2024-10-09 03:21:49.427594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.919 [2024-10-09 03:21:49.427607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:106504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.919 [2024-10-09 03:21:49.427620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.919 [2024-10-09 03:21:49.427635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 
lba:106512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.919 [2024-10-09 03:21:49.427651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.919 [2024-10-09 03:21:49.427665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:106520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.919 [2024-10-09 03:21:49.427677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.919 [2024-10-09 03:21:49.427691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:106528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.919 [2024-10-09 03:21:49.427708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.919 [2024-10-09 03:21:49.427723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:106536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.919 [2024-10-09 03:21:49.427735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.919 [2024-10-09 03:21:49.427749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:106544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.919 [2024-10-09 03:21:49.427761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.919 [2024-10-09 03:21:49.427775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:106552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.919 [2024-10-09 03:21:49.427787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.919 [2024-10-09 03:21:49.427801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:106560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.919 [2024-10-09 03:21:49.427813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.919 [2024-10-09 03:21:49.427827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:107080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.919 [2024-10-09 03:21:49.427840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.919 [2024-10-09 03:21:49.427853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:107088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.919 [2024-10-09 03:21:49.427866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.919 [2024-10-09 03:21:49.427879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:107096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.919 [2024-10-09 03:21:49.427891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.919 [2024-10-09 03:21:49.427906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:107104 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:26.919 [2024-10-09 03:21:49.427918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.919 [... 2024-10-09 03:21:49.427932 - 03:21:49.429607: repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs, one per outstanding qid:1 command (WRITE lba 107112-107264, READ lba 106568-106816), every completion ABORTED - SQ DELETION (00/08); after the recv-state error on tqpair=0x1f8edc0, nvme_qpair_abort_queued_reqs completes the remaining queued i/o manually with the same status ...] 00:19:26.921 [2024-10-09 03:21:49.429686] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f8edc0 was disconnected and freed. reset controller. 
00:19:26.921 [2024-10-09 03:21:49.429804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:26.921 [2024-10-09 03:21:49.429828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.921 [2024-10-09 03:21:49.429843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:26.921 [2024-10-09 03:21:49.429856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.921 [2024-10-09 03:21:49.429869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:26.921 [2024-10-09 03:21:49.429881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.921 [2024-10-09 03:21:49.429895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:26.921 [2024-10-09 03:21:49.429907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.921 [2024-10-09 03:21:49.429921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.921 [2024-10-09 03:21:49.429934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.921 [2024-10-09 03:21:49.429953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1df50 is same with the state(6) to be set 00:19:26.921 [2024-10-09 03:21:49.431033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:26.921 [2024-10-09 03:21:49.431085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f1df50 (9): Bad file descriptor 00:19:26.921 [2024-10-09 03:21:49.431427] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:26.921 [2024-10-09 03:21:49.431458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1df50 with addr=10.0.0.3, port=4421 00:19:26.921 [2024-10-09 03:21:49.431474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1df50 is same with the state(6) to be set 00:19:26.921 [2024-10-09 03:21:49.431503] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f1df50 (9): Bad file descriptor 00:19:26.921 [2024-10-09 03:21:49.431543] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:26.921 [2024-10-09 03:21:49.431560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:26.921 [2024-10-09 03:21:49.431573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:26.921 [2024-10-09 03:21:49.431602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
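The reconnect loop traced here is the host half of the failover: bdev_nvme disconnects the controller, the uring socket layer gets connect() errno 111 (ECONNREFUSED) against the secondary portal 10.0.0.3:4421, the reset attempt fails, and the retry started at the end of this entry succeeds about ten seconds later (the "Resetting controller successful" notice below). Outside the harness, a comparable manual probe of that portal could look like the following; this is purely illustrative, not part of the test script, and assumes nvme-cli is installed on the initiator side:

  nvme discover -t tcp -a 10.0.0.3 -s 4421   # refused (errno 111) until the 4421 listener is re-added, then returns the discovery log page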
00:19:26.921 [2024-10-09 03:21:49.431618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:26.921 6865.61 IOPS, 26.82 MiB/s [2024-10-09T03:22:10.224Z] 6887.62 IOPS, 26.90 MiB/s [2024-10-09T03:22:10.224Z] 6915.84 IOPS, 27.02 MiB/s [2024-10-09T03:22:10.224Z] 6942.82 IOPS, 27.12 MiB/s [2024-10-09T03:22:10.224Z] 6967.65 IOPS, 27.22 MiB/s [2024-10-09T03:22:10.224Z] 6991.66 IOPS, 27.31 MiB/s [2024-10-09T03:22:10.224Z] 7010.71 IOPS, 27.39 MiB/s [2024-10-09T03:22:10.224Z] 7025.16 IOPS, 27.44 MiB/s [2024-10-09T03:22:10.224Z] 7031.68 IOPS, 27.47 MiB/s [2024-10-09T03:22:10.224Z] 7033.29 IOPS, 27.47 MiB/s [2024-10-09T03:22:10.224Z] [2024-10-09 03:21:59.484722] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:26.921 7055.89 IOPS, 27.56 MiB/s [2024-10-09T03:22:10.224Z] 7094.53 IOPS, 27.71 MiB/s [2024-10-09T03:22:10.224Z] 7124.23 IOPS, 27.83 MiB/s [2024-10-09T03:22:10.224Z] 7157.29 IOPS, 27.96 MiB/s [2024-10-09T03:22:10.224Z] 7184.56 IOPS, 28.06 MiB/s [2024-10-09T03:22:10.224Z] 7217.16 IOPS, 28.19 MiB/s [2024-10-09T03:22:10.224Z] 7246.52 IOPS, 28.31 MiB/s [2024-10-09T03:22:10.224Z] 7275.68 IOPS, 28.42 MiB/s [2024-10-09T03:22:10.224Z] 7303.87 IOPS, 28.53 MiB/s [2024-10-09T03:22:10.224Z] 7332.00 IOPS, 28.64 MiB/s [2024-10-09T03:22:10.224Z] Received shutdown signal, test time was about 55.695476 seconds 00:19:26.921 00:19:26.921 Latency(us) 00:19:26.921 [2024-10-09T03:22:10.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.921 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:26.921 Verification LBA range: start 0x0 length 0x4000 00:19:26.921 Nvme0n1 : 55.69 7345.58 28.69 0.00 0.00 17397.19 1124.54 7046430.72 00:19:26.921 [2024-10-09T03:22:10.224Z] =================================================================================================================== 00:19:26.921 [2024-10-09T03:22:10.224Z] Total : 7345.58 28.69 0.00 0.00 17397.19 1124.54 7046430.72 00:19:26.921 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:27.225 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:19:27.225 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:27.225 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:19:27.225 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:27.225 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:19:27.225 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:27.225 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:19:27.225 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:27.225 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:27.225 rmmod nvme_tcp 00:19:27.225 rmmod nvme_fabrics 00:19:27.225 rmmod nvme_keyring 00:19:27.225 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:27.225 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:19:27.225 03:22:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:19:27.225 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@515 -- # '[' -n 80860 ']' 00:19:27.225 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # killprocess 80860 00:19:27.225 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 80860 ']' 00:19:27.225 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 80860 00:19:27.225 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:19:27.225 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:27.225 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80860 00:19:27.225 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:27.225 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:27.225 killing process with pid 80860 00:19:27.225 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80860' 00:19:27.225 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 80860 00:19:27.225 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 80860 00:19:27.484 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:27.484 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:19:27.484 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:19:27.484 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:19:27.484 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@789 -- # iptables-save 00:19:27.484 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:19:27.484 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:19:27.484 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:27.484 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:27.484 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:27.484 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:27.484 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:27.484 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:27.484 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:27.484 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:27.484 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:27.484 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:27.484 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete 
nvmf_br type bridge 00:19:27.742 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:27.742 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:27.742 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:27.742 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:27.743 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:27.743 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.743 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:27.743 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.743 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:19:27.743 00:19:27.743 real 1m2.309s 00:19:27.743 user 2m51.069s 00:19:27.743 sys 0m19.496s 00:19:27.743 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:27.743 03:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:27.743 ************************************ 00:19:27.743 END TEST nvmf_host_multipath 00:19:27.743 ************************************ 00:19:27.743 03:22:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:27.743 03:22:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:27.743 03:22:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:27.743 03:22:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.743 ************************************ 00:19:27.743 START TEST nvmf_timeout 00:19:27.743 ************************************ 00:19:27.743 03:22:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:28.003 * Looking for test storage... 
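Before the timeout test rebuilds its environment below, the nvmftestfini teardown traced just above is worth reading as one sequence; stripped of the xtrace prefixes it amounts to roughly the following condensed sketch (the final namespace removal is an assumption, since _remove_spdk_ns runs with its output redirected away):

  # drop only the SPDK_NVMF-tagged firewall rules, keep everything else
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # detach the veth bridge ports and take them down
  for p in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$p" nomaster
      ip link set "$p" down
  done
  # remove the bridge and the initiator-side veth pairs
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  # remove the target-side veth pairs from inside the namespace
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  # assumed: _remove_spdk_ns then deletes the namespace itself (not visible in the trace)
  ip netns del nvmf_tgt_ns_spdk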
00:19:28.003 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:28.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.003 --rc genhtml_branch_coverage=1 00:19:28.003 --rc genhtml_function_coverage=1 00:19:28.003 --rc genhtml_legend=1 00:19:28.003 --rc geninfo_all_blocks=1 00:19:28.003 --rc geninfo_unexecuted_blocks=1 00:19:28.003 00:19:28.003 ' 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:28.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.003 --rc genhtml_branch_coverage=1 00:19:28.003 --rc genhtml_function_coverage=1 00:19:28.003 --rc genhtml_legend=1 00:19:28.003 --rc geninfo_all_blocks=1 00:19:28.003 --rc geninfo_unexecuted_blocks=1 00:19:28.003 00:19:28.003 ' 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:28.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.003 --rc genhtml_branch_coverage=1 00:19:28.003 --rc genhtml_function_coverage=1 00:19:28.003 --rc genhtml_legend=1 00:19:28.003 --rc geninfo_all_blocks=1 00:19:28.003 --rc geninfo_unexecuted_blocks=1 00:19:28.003 00:19:28.003 ' 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:28.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.003 --rc genhtml_branch_coverage=1 00:19:28.003 --rc genhtml_function_coverage=1 00:19:28.003 --rc genhtml_legend=1 00:19:28.003 --rc geninfo_all_blocks=1 00:19:28.003 --rc geninfo_unexecuted_blocks=1 00:19:28.003 00:19:28.003 ' 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:28.003 
03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:28.003 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:28.003 03:22:11 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.003 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@458 -- # nvmf_veth_init 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:28.004 Cannot find device "nvmf_init_br" 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:28.004 Cannot find device "nvmf_init_br2" 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:19:28.004 Cannot find device "nvmf_tgt_br" 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:28.004 Cannot find device "nvmf_tgt_br2" 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:28.004 Cannot find device "nvmf_init_br" 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:28.004 Cannot find device "nvmf_init_br2" 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:28.004 Cannot find device "nvmf_tgt_br" 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:19:28.004 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:28.263 Cannot find device "nvmf_tgt_br2" 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:28.263 Cannot find device "nvmf_br" 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:28.263 Cannot find device "nvmf_init_if" 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:28.263 Cannot find device "nvmf_init_if2" 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:28.263 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:28.263 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
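Stripped of the xtrace prefixes, the nvmf_veth_init sequence traced above builds a two-initiator / two-target veth topology joined by one bridge; roughly, as a condensed sketch (the real helper also tags each iptables rule with an SPDK_NVMF comment so nvmftestfini can strip it later):

  ip netns add nvmf_tgt_ns_spdk
  # veth pairs: the *_if ends carry the IP addresses, the *_br ends get enslaved to the bridge
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  # target-side interfaces live inside the namespace
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # addressing: initiators 10.0.0.1/.2, targets 10.0.0.3/.4
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # bring everything up, including loopback inside the namespace
  ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
  ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
  ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # one bridge ties the host-side ends together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for p in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$p" master nvmf_br; done
  # allow NVMe/TCP (port 4420) in from the initiator interfaces and forwarding across the bridge
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT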
00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:28.263 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:28.263 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:19:28.263 00:19:28.263 --- 10.0.0.3 ping statistics --- 00:19:28.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.263 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:19:28.263 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:28.522 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:28.522 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:19:28.522 00:19:28.522 --- 10.0.0.4 ping statistics --- 00:19:28.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.522 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:19:28.522 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:28.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:28.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:19:28.522 00:19:28.522 --- 10.0.0.1 ping statistics --- 00:19:28.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.522 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:19:28.522 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:28.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:28.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:19:28.522 00:19:28.522 --- 10.0.0.2 ping statistics --- 00:19:28.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.522 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:19:28.522 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:28.522 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # return 0 00:19:28.522 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:28.522 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:28.522 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:28.522 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:28.522 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:28.522 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:28.522 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:28.522 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:19:28.522 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:28.522 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:28.522 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:28.522 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # nvmfpid=82078 00:19:28.522 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:28.522 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # waitforlisten 82078 00:19:28.522 03:22:11 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 82078 ']' 00:19:28.522 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.522 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:28.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.522 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.522 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:28.522 03:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:28.522 [2024-10-09 03:22:11.670400] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:19:28.522 [2024-10-09 03:22:11.670539] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.522 [2024-10-09 03:22:11.813832] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:28.781 [2024-10-09 03:22:11.915229] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:28.781 [2024-10-09 03:22:11.915297] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:28.781 [2024-10-09 03:22:11.915311] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:28.781 [2024-10-09 03:22:11.915322] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:28.781 [2024-10-09 03:22:11.915331] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
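nvmfappstart launches nvmf_tgt inside the namespace (-m 0x3, two cores) and then blocks in waitforlisten until PID 82078 answers on /var/tmp/spdk.sock, retrying up to the max_retries=100 shown above. A rough stand-in for that wait loop, assuming an rpc_get_methods probe (the real helper in autotest_common.sh is more involved and its exact probe is not visible in this excerpt):

  # Hypothetical simplification of waitforlisten: poll the RPC socket until it answers.
  pid=82078; rpc_addr=/var/tmp/spdk.sock
  for ((i = 0; i < 100; i++)); do
      kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null; then
          break                                   # RPC server is up; configuration can proceed
      fi
      sleep 0.1
  done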
00:19:28.781 [2024-10-09 03:22:11.916072] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.781 [2024-10-09 03:22:11.916073] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.781 [2024-10-09 03:22:11.973721] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:29.718 03:22:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:29.718 03:22:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:19:29.718 03:22:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:29.718 03:22:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:29.718 03:22:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:29.718 03:22:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.718 03:22:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:29.718 03:22:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:29.718 [2024-10-09 03:22:12.992760] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:29.718 03:22:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:30.286 Malloc0 00:19:30.286 03:22:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:30.544 03:22:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:30.544 03:22:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:30.803 [2024-10-09 03:22:14.015881] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:30.803 03:22:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82127 00:19:30.803 03:22:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:30.803 03:22:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82127 /var/tmp/bdevperf.sock 00:19:30.803 03:22:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 82127 ']' 00:19:30.803 03:22:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:30.804 03:22:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:30.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:30.804 03:22:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
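Once the target answers, host/timeout.sh configures it over /var/tmp/spdk.sock with the RPCs traced above: a TCP transport, a RAM-backed Malloc0 bdev (64 MiB, 512-byte blocks), the nqn.2016-06.io.spdk:cnode1 subsystem with that bdev as its namespace, and a listener on 10.0.0.3:4420. Condensed into a plain script (same commands and flags as the trace, with the rpc.py path folded into a helper):

  # Target-side configuration, restated from the trace above.
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }
  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB malloc bdev, 512 B blocks
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

bdevperf (-m 0x4, -q 128, -o 4096, -w verify, -t 10) is then started with its own RPC socket at /var/tmp/bdevperf.sock, and the script waits for it the same way it waited for nvmf_tgt.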
00:19:30.804 03:22:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:30.804 03:22:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:30.804 [2024-10-09 03:22:14.091439] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:19:30.804 [2024-10-09 03:22:14.091549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82127 ] 00:19:31.063 [2024-10-09 03:22:14.228651] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.063 [2024-10-09 03:22:14.328150] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.321 [2024-10-09 03:22:14.401654] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:31.889 03:22:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:31.889 03:22:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:19:31.889 03:22:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:32.148 03:22:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:32.716 NVMe0n1 00:19:32.716 03:22:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82152 00:19:32.716 03:22:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:32.716 03:22:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:19:32.716 Running I/O for 10 seconds... 
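The initiator side then attaches the remote subsystem through the bdevperf RPC socket: bdev_nvme_set_options -r -1 (kept exactly as traced), followed by bdev_nvme_attach_controller with the two knobs this test exercises, --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2. A condensed restatement, including the perform_tests trigger that starts the 10-second verify run:

  # Host-side sequence, restated from the trace above (bdevperf_rpc is just a shorthand).
  bdevperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"; }
  bdevperf_rpc bdev_nvme_set_options -r -1
  bdevperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  rpc_pid=$!                                                     # pid 82152 in this run

One second into the run the script removes the 10.0.0.3:4420 listener (the nvmf_subsystem_remove_listener call on the next line), so the established NVMe/TCP connection is torn down and every queued I/O comes back as ABORTED - SQ DELETION, which is the wall of nvme_qpair notices that follows.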
00:19:33.652 03:22:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:33.914 8528.00 IOPS, 33.31 MiB/s [2024-10-09T03:22:17.217Z] [2024-10-09 03:22:17.093510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaec70 is same with the state(6) to be set 00:19:33.914 [2024-10-09 03:22:17.094368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaec70 is same with the state(6) to be set 00:19:33.914 [2024-10-09 03:22:17.094534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaec70 is same with the state(6) to be set 00:19:33.914 [2024-10-09 03:22:17.094668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.914 [2024-10-09 03:22:17.094710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.914 [2024-10-09 03:22:17.094733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.914 [2024-10-09 03:22:17.094743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.914 [2024-10-09 03:22:17.094755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.914 [2024-10-09 03:22:17.094764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.914 [2024-10-09 03:22:17.094773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.914 [2024-10-09 03:22:17.094782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.914 [2024-10-09 03:22:17.094792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.914 [2024-10-09 03:22:17.094801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.914 [2024-10-09 03:22:17.094811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.914 [2024-10-09 03:22:17.094820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.914 [2024-10-09 03:22:17.094829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.914 [2024-10-09 03:22:17.094838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.914 [2024-10-09 03:22:17.094848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.914 [2024-10-09 03:22:17.094856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:33.914 [2024-10-09 03:22:17.094866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.914 [2024-10-09 03:22:17.094874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.914 [2024-10-09 03:22:17.094883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.914 [2024-10-09 03:22:17.094893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.914 [2024-10-09 03:22:17.094903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.914 [2024-10-09 03:22:17.094911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.914 [2024-10-09 03:22:17.094921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.914 [2024-10-09 03:22:17.094929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.914 [2024-10-09 03:22:17.094939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.914 [2024-10-09 03:22:17.094947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.914 [2024-10-09 03:22:17.094957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.914 [2024-10-09 03:22:17.094965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.914 [2024-10-09 03:22:17.094975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.914 [2024-10-09 03:22:17.094983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.914 [2024-10-09 03:22:17.094993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.914 [2024-10-09 03:22:17.095001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.914 [2024-10-09 03:22:17.095010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.914 [2024-10-09 03:22:17.095018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.914 [2024-10-09 03:22:17.095029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.914 [2024-10-09 03:22:17.095037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.914 [2024-10-09 03:22:17.095061] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.914 [2024-10-09 03:22:17.095072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.914 [2024-10-09 03:22:17.095082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.914 [2024-10-09 03:22:17.095090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.914 [2024-10-09 03:22:17.095101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.914 [2024-10-09 03:22:17.095109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.914 [2024-10-09 03:22:17.095120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.914 [2024-10-09 03:22:17.095128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.914 [2024-10-09 03:22:17.095138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.914 [2024-10-09 03:22:17.095147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.914 [2024-10-09 03:22:17.095156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.914 [2024-10-09 03:22:17.095165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.914 [2024-10-09 03:22:17.095174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.914 [2024-10-09 03:22:17.095183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.914 [2024-10-09 03:22:17.095192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.914 [2024-10-09 03:22:17.095201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.914 [2024-10-09 03:22:17.095210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.914 [2024-10-09 03:22:17.095218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.914 [2024-10-09 03:22:17.095228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.914 [2024-10-09 03:22:17.095236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.914 [2024-10-09 03:22:17.095246] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.914 [2024-10-09 03:22:17.095254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.914 [2024-10-09 03:22:17.095263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.914 [2024-10-09 03:22:17.095271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.914 [2024-10-09 03:22:17.095280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.914 [2024-10-09 03:22:17.095288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.914 [2024-10-09 03:22:17.095298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.914 [2024-10-09 03:22:17.095306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.914 [2024-10-09 03:22:17.095316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.914 [2024-10-09 03:22:17.095324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.914 [2024-10-09 03:22:17.095334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.915 [2024-10-09 03:22:17.095343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.915 [2024-10-09 03:22:17.095362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.915 [2024-10-09 03:22:17.095380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.915 [2024-10-09 03:22:17.095397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.915 [2024-10-09 03:22:17.095415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:104 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.915 [2024-10-09 03:22:17.095433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.915 [2024-10-09 03:22:17.095451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.915 [2024-10-09 03:22:17.095469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.915 [2024-10-09 03:22:17.095489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.915 [2024-10-09 03:22:17.095507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.915 [2024-10-09 03:22:17.095525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.915 [2024-10-09 03:22:17.095543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.915 [2024-10-09 03:22:17.095561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.915 [2024-10-09 03:22:17.095581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.915 [2024-10-09 03:22:17.095599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:78368 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.915 [2024-10-09 03:22:17.095617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.915 [2024-10-09 03:22:17.095635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.915 [2024-10-09 03:22:17.095653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.915 [2024-10-09 03:22:17.095671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.915 [2024-10-09 03:22:17.095689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.915 [2024-10-09 03:22:17.095709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.915 [2024-10-09 03:22:17.095727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.915 [2024-10-09 03:22:17.095753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.915 [2024-10-09 03:22:17.095771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.915 [2024-10-09 03:22:17.095792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.915 
[2024-10-09 03:22:17.095810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.915 [2024-10-09 03:22:17.095828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.915 [2024-10-09 03:22:17.095846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.915 [2024-10-09 03:22:17.095864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.915 [2024-10-09 03:22:17.095882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.915 [2024-10-09 03:22:17.095900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.915 [2024-10-09 03:22:17.095917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.915 [2024-10-09 03:22:17.095938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.915 [2024-10-09 03:22:17.095956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.915 [2024-10-09 03:22:17.095975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.095985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.915 [2024-10-09 03:22:17.095994] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.096004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.915 [2024-10-09 03:22:17.096012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.096022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.915 [2024-10-09 03:22:17.096030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.096040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.915 [2024-10-09 03:22:17.096058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.096069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.915 [2024-10-09 03:22:17.096078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.096088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.915 [2024-10-09 03:22:17.096097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.096107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.915 [2024-10-09 03:22:17.096116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.915 [2024-10-09 03:22:17.096126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:78904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.916 [2024-10-09 03:22:17.096135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:78400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.916 [2024-10-09 03:22:17.096153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.916 [2024-10-09 03:22:17.096172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.916 [2024-10-09 03:22:17.096190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.916 [2024-10-09 03:22:17.096207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.916 [2024-10-09 03:22:17.096225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.916 [2024-10-09 03:22:17.096243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.916 [2024-10-09 03:22:17.096263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.916 [2024-10-09 03:22:17.096298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.916 [2024-10-09 03:22:17.096317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.916 [2024-10-09 03:22:17.096336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:78928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.916 [2024-10-09 03:22:17.096354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.916 [2024-10-09 03:22:17.096373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.916 [2024-10-09 03:22:17.096391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.916 [2024-10-09 03:22:17.096409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.916 [2024-10-09 03:22:17.096428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.916 [2024-10-09 03:22:17.096447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.916 [2024-10-09 03:22:17.096466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:78984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.916 [2024-10-09 03:22:17.096486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.916 [2024-10-09 03:22:17.096505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.916 [2024-10-09 03:22:17.096523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.916 [2024-10-09 03:22:17.096541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.916 [2024-10-09 03:22:17.096560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.916 [2024-10-09 03:22:17.096578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096588] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.916 [2024-10-09 03:22:17.096596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.916 [2024-10-09 03:22:17.096630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.916 [2024-10-09 03:22:17.096649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.916 [2024-10-09 03:22:17.096667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.916 [2024-10-09 03:22:17.096695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.916 [2024-10-09 03:22:17.096720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.916 [2024-10-09 03:22:17.096738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.916 [2024-10-09 03:22:17.096755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.916 [2024-10-09 03:22:17.096773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.916 [2024-10-09 03:22:17.096791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096801] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.916 [2024-10-09 03:22:17.096809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.916 [2024-10-09 03:22:17.096826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.916 [2024-10-09 03:22:17.096844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.916 [2024-10-09 03:22:17.096861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.916 [2024-10-09 03:22:17.096879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.916 [2024-10-09 03:22:17.096896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.916 [2024-10-09 03:22:17.096913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.916 [2024-10-09 03:22:17.096946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.916 [2024-10-09 03:22:17.096956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.917 [2024-10-09 03:22:17.096965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.917 [2024-10-09 03:22:17.096974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.917 [2024-10-09 03:22:17.096983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.917 [2024-10-09 03:22:17.096993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:115 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.917 [2024-10-09 03:22:17.097008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.917 [2024-10-09 03:22:17.097018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.917 [2024-10-09 03:22:17.097031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.917 [2024-10-09 03:22:17.097042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.917 [2024-10-09 03:22:17.097050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.917 [2024-10-09 03:22:17.097060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.917 [2024-10-09 03:22:17.097069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.917 [2024-10-09 03:22:17.097079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.917 [2024-10-09 03:22:17.097087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.917 [2024-10-09 03:22:17.097106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.917 [2024-10-09 03:22:17.097115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.917 [2024-10-09 03:22:17.097125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.917 [2024-10-09 03:22:17.097134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.917 [2024-10-09 03:22:17.097143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.917 [2024-10-09 03:22:17.097152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.917 [2024-10-09 03:22:17.097161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2186cf0 is same with the state(6) to be set 00:19:33.917 [2024-10-09 03:22:17.097173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:33.917 [2024-10-09 03:22:17.097180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:33.917 [2024-10-09 03:22:17.097187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79192 len:8 PRP1 0x0 PRP2 0x0 00:19:33.917 [2024-10-09 03:22:17.097196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.917 [2024-10-09 03:22:17.097271] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: 
qpair 0x2186cf0 was disconnected and freed. reset controller. 00:19:33.917 [2024-10-09 03:22:17.097369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.917 [2024-10-09 03:22:17.097384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.917 [2024-10-09 03:22:17.097394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.917 [2024-10-09 03:22:17.097403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.917 [2024-10-09 03:22:17.097412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.917 [2024-10-09 03:22:17.097420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.917 [2024-10-09 03:22:17.097429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.917 [2024-10-09 03:22:17.097437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.917 [2024-10-09 03:22:17.097445] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21192e0 is same with the state(6) to be set 00:19:33.917 [2024-10-09 03:22:17.097636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:33.917 [2024-10-09 03:22:17.097668] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21192e0 (9): Bad file descriptor 00:19:33.917 [2024-10-09 03:22:17.097772] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:33.917 [2024-10-09 03:22:17.097798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21192e0 with addr=10.0.0.3, port=4420 00:19:33.917 [2024-10-09 03:22:17.097816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21192e0 is same with the state(6) to be set 00:19:33.917 [2024-10-09 03:22:17.097833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21192e0 (9): Bad file descriptor 00:19:33.917 [2024-10-09 03:22:17.097847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:33.917 [2024-10-09 03:22:17.097856] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:33.917 [2024-10-09 03:22:17.097866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:33.917 [2024-10-09 03:22:17.097884] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
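The reset attempt above fails with errno 111 (ECONNREFUSED, since the listener is gone) and the same cycle repeats below: uring_sock_create cannot connect, nvme_tcp reports the qpair error, controller reinitialization fails, and bdev_nvme schedules another reset. With the knobs set at attach time the expected schedule is easy to spell out, and it lines up with the timestamps that follow (attempts at 03:22:17, 03:22:19 and 03:22:21, then the final give-up at 03:22:23). A back-of-envelope sketch of that schedule, assuming the usual reading of the two parameters (retry every reconnect-delay-sec until ctrlr-loss-timeout-sec has elapsed):

  # Illustrative arithmetic only; not SPDK code.
  reconnect_delay=2 ctrlr_loss_timeout=5        # seconds, as passed to bdev_nvme_attach_controller
  for ((t = 0; t < ctrlr_loss_timeout; t += reconnect_delay)); do
      echo "reconnect attempt at disconnect+${t}s"      # prints +0s, +2s, +4s
  done
  echo "controller declared lost after ${ctrlr_loss_timeout}s; queued I/O is failed back to bdevperf"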
00:19:33.917 [2024-10-09 03:22:17.097894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:33.917 03:22:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:19:35.790 4886.00 IOPS, 19.09 MiB/s [2024-10-09T03:22:19.352Z] 3257.33 IOPS, 12.72 MiB/s [2024-10-09T03:22:19.352Z] [2024-10-09 03:22:19.098249] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:36.049 [2024-10-09 03:22:19.098321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21192e0 with addr=10.0.0.3, port=4420 00:19:36.049 [2024-10-09 03:22:19.098353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21192e0 is same with the state(6) to be set 00:19:36.049 [2024-10-09 03:22:19.098381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21192e0 (9): Bad file descriptor 00:19:36.049 [2024-10-09 03:22:19.098432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:36.049 [2024-10-09 03:22:19.098453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:36.049 [2024-10-09 03:22:19.098465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:36.049 [2024-10-09 03:22:19.098493] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:36.049 [2024-10-09 03:22:19.098506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:36.049 03:22:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:19:36.049 03:22:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:36.049 03:22:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:36.308 03:22:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:19:36.308 03:22:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:19:36.308 03:22:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:36.308 03:22:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:36.567 03:22:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:19:36.567 03:22:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:19:37.762 2443.00 IOPS, 9.54 MiB/s [2024-10-09T03:22:21.324Z] 1954.40 IOPS, 7.63 MiB/s [2024-10-09T03:22:21.324Z] [2024-10-09 03:22:21.098675] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:38.021 [2024-10-09 03:22:21.098742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21192e0 with addr=10.0.0.3, port=4420 00:19:38.021 [2024-10-09 03:22:21.098758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21192e0 is same with the state(6) to be set 00:19:38.021 [2024-10-09 03:22:21.098796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21192e0 (9): Bad file descriptor 00:19:38.021 [2024-10-09 03:22:21.098814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is 
in error state 00:19:38.021 [2024-10-09 03:22:21.098823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:38.021 [2024-10-09 03:22:21.098834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:38.021 [2024-10-09 03:22:21.098859] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:38.021 [2024-10-09 03:22:21.098871] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:39.949 1628.67 IOPS, 6.36 MiB/s [2024-10-09T03:22:23.252Z] 1396.00 IOPS, 5.45 MiB/s [2024-10-09T03:22:23.252Z] [2024-10-09 03:22:23.098960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:39.949 [2024-10-09 03:22:23.099036] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:39.949 [2024-10-09 03:22:23.099067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:39.949 [2024-10-09 03:22:23.099084] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:39.949 [2024-10-09 03:22:23.099113] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:40.886 1221.50 IOPS, 4.77 MiB/s 00:19:40.886 Latency(us) 00:19:40.886 [2024-10-09T03:22:24.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.886 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:40.886 Verification LBA range: start 0x0 length 0x4000 00:19:40.886 NVMe0n1 : 8.13 1201.32 4.69 15.74 0.00 105013.82 2874.65 7015926.69 00:19:40.886 [2024-10-09T03:22:24.189Z] =================================================================================================================== 00:19:40.886 [2024-10-09T03:22:24.189Z] Total : 1201.32 4.69 15.74 0.00 105013.82 2874.65 7015926.69 00:19:40.886 { 00:19:40.886 "results": [ 00:19:40.886 { 00:19:40.886 "job": "NVMe0n1", 00:19:40.886 "core_mask": "0x4", 00:19:40.886 "workload": "verify", 00:19:40.886 "status": "finished", 00:19:40.886 "verify_range": { 00:19:40.886 "start": 0, 00:19:40.886 "length": 16384 00:19:40.886 }, 00:19:40.886 "queue_depth": 128, 00:19:40.886 "io_size": 4096, 00:19:40.886 "runtime": 8.134355, 00:19:40.886 "iops": 1201.324505753683, 00:19:40.886 "mibps": 4.6926738506003245, 00:19:40.886 "io_failed": 128, 00:19:40.886 "io_timeout": 0, 00:19:40.886 "avg_latency_us": 105013.81820532598, 00:19:40.886 "min_latency_us": 2874.6472727272726, 00:19:40.886 "max_latency_us": 7015926.69090909 00:19:40.886 } 00:19:40.886 ], 00:19:40.886 "core_count": 1 00:19:40.886 } 00:19:41.454 03:22:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:19:41.454 03:22:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:41.454 03:22:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:41.713 03:22:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:19:41.713 03:22:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:19:41.713 03:22:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 
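The bdevperf results above are printed twice: once as the human-readable latency table and once as the raw JSON object with the per-job fields (iops, mibps, io_failed, avg_latency_us). A minimal post-processing sketch with jq, assuming that JSON object has been captured to a file named results.json (a hypothetical name; this run only prints the object into the log):

    # Hypothetical: the result object shown above, saved verbatim to results.json.
    jq -r '.results[0] | "\(.job): \(.iops) IOPS, \(.io_failed) failed IOs, avg latency \(.avg_latency_us) us"' results.json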
00:19:41.713 03:22:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:41.972 03:22:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:19:41.972 03:22:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 82152 00:19:41.972 03:22:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82127 00:19:41.972 03:22:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 82127 ']' 00:19:41.972 03:22:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 82127 00:19:41.972 03:22:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:19:41.972 03:22:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:41.972 03:22:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82127 00:19:41.972 03:22:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:41.972 killing process with pid 82127 00:19:41.972 03:22:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:41.972 03:22:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82127' 00:19:41.972 03:22:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 82127 00:19:41.972 Received shutdown signal, test time was about 9.289554 seconds 00:19:41.972 00:19:41.972 Latency(us) 00:19:41.972 [2024-10-09T03:22:25.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:41.972 [2024-10-09T03:22:25.275Z] =================================================================================================================== 00:19:41.972 [2024-10-09T03:22:25.275Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:41.972 03:22:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 82127 00:19:42.231 03:22:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:42.490 [2024-10-09 03:22:25.764320] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:42.490 03:22:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82275 00:19:42.490 03:22:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82275 /var/tmp/bdevperf.sock 00:19:42.490 03:22:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:42.490 03:22:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 82275 ']' 00:19:42.490 03:22:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:42.490 03:22:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:42.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:42.490 03:22:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:42.490 03:22:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:42.490 03:22:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:42.749 [2024-10-09 03:22:25.826266] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:19:42.749 [2024-10-09 03:22:25.826345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82275 ] 00:19:42.749 [2024-10-09 03:22:25.959484] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.007 [2024-10-09 03:22:26.072311] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:43.007 [2024-10-09 03:22:26.143337] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:43.574 03:22:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:43.574 03:22:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:19:43.574 03:22:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:43.832 03:22:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:19:44.399 NVMe0n1 00:19:44.399 03:22:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82297 00:19:44.399 03:22:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:44.399 03:22:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:19:44.399 Running I/O for 10 seconds... 
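The attach above is what arms the timeout behaviour exercised in the rest of this run: bdev_nvme_set_options -r -1 allows unlimited reconnect retries, and bdev_nvme_attach_controller is given --ctrlr-loss-timeout-sec 5, --fast-io-fail-timeout-sec 2 and --reconnect-delay-sec 1. The same calls, gathered as a plain sketch (socket, address and NQN as in this log; adjust for another setup):

    # RPC sequence driven by host/timeout.sh against the bdevperf app above.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
    # Start the I/O run once the controller is attached.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests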
00:19:45.340 03:22:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:45.601 7725.00 IOPS, 30.18 MiB/s [2024-10-09T03:22:28.904Z] [2024-10-09 03:22:28.666089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:70760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.601 [2024-10-09 03:22:28.666188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.601 [2024-10-09 03:22:28.666209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:70768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.601 [2024-10-09 03:22:28.666220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:70776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.602 [2024-10-09 03:22:28.666240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:70784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.602 [2024-10-09 03:22:28.666262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:70792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.602 [2024-10-09 03:22:28.666281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:70800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.602 [2024-10-09 03:22:28.666302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:70808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.602 [2024-10-09 03:22:28.666320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:70816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.602 [2024-10-09 03:22:28.666341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:70248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.602 [2024-10-09 03:22:28.666361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:70256 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.602 [2024-10-09 03:22:28.666381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:70264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.602 [2024-10-09 03:22:28.666401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:70272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.602 [2024-10-09 03:22:28.666420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:70280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.602 [2024-10-09 03:22:28.666463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:70288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.602 [2024-10-09 03:22:28.666483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.602 [2024-10-09 03:22:28.666502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:70304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.602 [2024-10-09 03:22:28.666521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:70312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.602 [2024-10-09 03:22:28.666540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:70320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.602 [2024-10-09 03:22:28.666578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:70328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.602 [2024-10-09 03:22:28.666597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:70336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:45.602 [2024-10-09 03:22:28.666615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:70344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.602 [2024-10-09 03:22:28.666632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:70352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.602 [2024-10-09 03:22:28.666649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:70360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.602 [2024-10-09 03:22:28.666666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.602 [2024-10-09 03:22:28.666683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.602 [2024-10-09 03:22:28.666700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:70832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.602 [2024-10-09 03:22:28.666719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:70840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.602 [2024-10-09 03:22:28.666737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:70848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.602 [2024-10-09 03:22:28.666755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:70856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.602 [2024-10-09 03:22:28.666773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:70864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.602 [2024-10-09 03:22:28.666791] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.602 [2024-10-09 03:22:28.666808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:70880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.602 [2024-10-09 03:22:28.666826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:70376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.602 [2024-10-09 03:22:28.666842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.602 [2024-10-09 03:22:28.666861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:70392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.602 [2024-10-09 03:22:28.666878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:70400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.602 [2024-10-09 03:22:28.666896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:70408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.602 [2024-10-09 03:22:28.666914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:70416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.602 [2024-10-09 03:22:28.666932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:70424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.602 [2024-10-09 03:22:28.666950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:70432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.602 [2024-10-09 03:22:28.666967] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:70440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.602 [2024-10-09 03:22:28.666984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.666994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:70448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.602 [2024-10-09 03:22:28.667002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.602 [2024-10-09 03:22:28.667011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:70456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.602 [2024-10-09 03:22:28.667019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:70464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.603 [2024-10-09 03:22:28.667036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:70472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.603 [2024-10-09 03:22:28.667092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:70480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.603 [2024-10-09 03:22:28.667112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:70488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.603 [2024-10-09 03:22:28.667131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:70496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.603 [2024-10-09 03:22:28.667148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:70504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.603 [2024-10-09 03:22:28.667166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:70512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.603 [2024-10-09 03:22:28.667185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:70520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.603 [2024-10-09 03:22:28.667203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:70528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.603 [2024-10-09 03:22:28.667221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:70536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.603 [2024-10-09 03:22:28.667239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:70544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.603 [2024-10-09 03:22:28.667256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:70552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.603 [2024-10-09 03:22:28.667274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:70560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.603 [2024-10-09 03:22:28.667291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.603 [2024-10-09 03:22:28.667309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:70896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.603 [2024-10-09 03:22:28.667327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.603 [2024-10-09 03:22:28.667345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:70912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.603 [2024-10-09 03:22:28.667365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:45.603 [2024-10-09 03:22:28.667375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.603 [2024-10-09 03:22:28.667383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:70928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.603 [2024-10-09 03:22:28.667401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:70936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.603 [2024-10-09 03:22:28.667418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.603 [2024-10-09 03:22:28.667435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.603 [2024-10-09 03:22:28.667453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.603 [2024-10-09 03:22:28.667471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.603 [2024-10-09 03:22:28.667488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.603 [2024-10-09 03:22:28.667507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:70984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.603 [2024-10-09 03:22:28.667525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:70992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.603 [2024-10-09 03:22:28.667543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667552] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.603 [2024-10-09 03:22:28.667560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.603 [2024-10-09 03:22:28.667579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:70568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.603 [2024-10-09 03:22:28.667598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.603 [2024-10-09 03:22:28.667616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:70584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.603 [2024-10-09 03:22:28.667634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:70592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.603 [2024-10-09 03:22:28.667651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:70600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.603 [2024-10-09 03:22:28.667668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:70608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.603 [2024-10-09 03:22:28.667685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:70616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.603 [2024-10-09 03:22:28.667703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:70624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.603 [2024-10-09 03:22:28.667720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667730] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:70632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.603 [2024-10-09 03:22:28.667738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:70640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.603 [2024-10-09 03:22:28.667757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.603 [2024-10-09 03:22:28.667767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:70648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.603 [2024-10-09 03:22:28.667777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.604 [2024-10-09 03:22:28.667787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.604 [2024-10-09 03:22:28.667795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.604 [2024-10-09 03:22:28.667804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:70664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.604 [2024-10-09 03:22:28.667813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.604 [2024-10-09 03:22:28.667823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:70672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.604 [2024-10-09 03:22:28.667831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.604 [2024-10-09 03:22:28.667841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:70680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.604 [2024-10-09 03:22:28.667849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.604 [2024-10-09 03:22:28.667859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:70688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.604 [2024-10-09 03:22:28.667868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.604 [2024-10-09 03:22:28.667878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:71016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.604 [2024-10-09 03:22:28.667887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.604 [2024-10-09 03:22:28.667897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.604 [2024-10-09 03:22:28.667906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.604 [2024-10-09 03:22:28.667915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:97 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.604 [2024-10-09 03:22:28.667924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.604 [2024-10-09 03:22:28.667933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.604 [2024-10-09 03:22:28.667941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.604 [2024-10-09 03:22:28.667951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.604 [2024-10-09 03:22:28.667959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.604 [2024-10-09 03:22:28.667969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.604 [2024-10-09 03:22:28.667977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.604 [2024-10-09 03:22:28.667987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.604 [2024-10-09 03:22:28.667995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.604 [2024-10-09 03:22:28.668005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.604 [2024-10-09 03:22:28.668013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.604 [2024-10-09 03:22:28.668023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:70696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.604 [2024-10-09 03:22:28.668031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.604 [2024-10-09 03:22:28.668041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.604 [2024-10-09 03:22:28.668061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.604 [2024-10-09 03:22:28.668073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:70712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.604 [2024-10-09 03:22:28.668081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.604 [2024-10-09 03:22:28.668091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:70720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.604 [2024-10-09 03:22:28.668099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.604 [2024-10-09 03:22:28.668109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:70728 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.604 [2024-10-09 03:22:28.668117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.604 [2024-10-09 03:22:28.668127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.604 [2024-10-09 03:22:28.668147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.604 [2024-10-09 03:22:28.668157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:70744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.604 [2024-10-09 03:22:28.668165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.604 [2024-10-09 03:22:28.668174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa13cf0 is same with the state(6) to be set 00:19:45.604 [2024-10-09 03:22:28.668185] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.604 [2024-10-09 03:22:28.668192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.604 [2024-10-09 03:22:28.668200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70752 len:8 PRP1 0x0 PRP2 0x0 00:19:45.604 [2024-10-09 03:22:28.668209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.604 [2024-10-09 03:22:28.668218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.604 [2024-10-09 03:22:28.668225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.604 [2024-10-09 03:22:28.668232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71080 len:8 PRP1 0x0 PRP2 0x0 00:19:45.604 [2024-10-09 03:22:28.668240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.604 [2024-10-09 03:22:28.668248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.604 [2024-10-09 03:22:28.668255] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.604 [2024-10-09 03:22:28.668262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71088 len:8 PRP1 0x0 PRP2 0x0 00:19:45.604 [2024-10-09 03:22:28.668270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.604 [2024-10-09 03:22:28.668278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.604 [2024-10-09 03:22:28.668284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.604 [2024-10-09 03:22:28.668292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71096 len:8 PRP1 0x0 PRP2 0x0 00:19:45.604 [2024-10-09 03:22:28.668299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.604 [2024-10-09 03:22:28.668307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.604 [2024-10-09 
03:22:28.668313] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.604 [2024-10-09 03:22:28.668320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71104 len:8 PRP1 0x0 PRP2 0x0 00:19:45.604 [2024-10-09 03:22:28.668328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.604 [2024-10-09 03:22:28.668335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.604 [2024-10-09 03:22:28.668341] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.604 [2024-10-09 03:22:28.668347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71112 len:8 PRP1 0x0 PRP2 0x0 00:19:45.604 [2024-10-09 03:22:28.668355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.604 [2024-10-09 03:22:28.668363] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.604 [2024-10-09 03:22:28.668369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.604 [2024-10-09 03:22:28.668376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71120 len:8 PRP1 0x0 PRP2 0x0 00:19:45.604 [2024-10-09 03:22:28.668384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.604 [2024-10-09 03:22:28.668398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.604 [2024-10-09 03:22:28.668405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.604 [2024-10-09 03:22:28.668412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71128 len:8 PRP1 0x0 PRP2 0x0 00:19:45.604 [2024-10-09 03:22:28.668420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.604 [2024-10-09 03:22:28.668429] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.604 [2024-10-09 03:22:28.668435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.604 [2024-10-09 03:22:28.668443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71136 len:8 PRP1 0x0 PRP2 0x0 00:19:45.604 [2024-10-09 03:22:28.668451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.604 [2024-10-09 03:22:28.668459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.604 [2024-10-09 03:22:28.668465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.604 [2024-10-09 03:22:28.668472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71144 len:8 PRP1 0x0 PRP2 0x0 00:19:45.604 [2024-10-09 03:22:28.668480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.604 [2024-10-09 03:22:28.668488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.604 [2024-10-09 03:22:28.668494] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.604 [2024-10-09 03:22:28.668501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71152 len:8 PRP1 0x0 PRP2 0x0 00:19:45.604 [2024-10-09 03:22:28.668509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.604 [2024-10-09 03:22:28.668517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.604 [2024-10-09 03:22:28.668524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.604 [2024-10-09 03:22:28.668530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71160 len:8 PRP1 0x0 PRP2 0x0 00:19:45.604 [2024-10-09 03:22:28.668538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.604 [2024-10-09 03:22:28.668546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.605 [2024-10-09 03:22:28.668553] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.605 [2024-10-09 03:22:28.668559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71168 len:8 PRP1 0x0 PRP2 0x0 00:19:45.605 [2024-10-09 03:22:28.668567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.605 [2024-10-09 03:22:28.668575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.605 [2024-10-09 03:22:28.668581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.605 [2024-10-09 03:22:28.668587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71176 len:8 PRP1 0x0 PRP2 0x0 00:19:45.605 [2024-10-09 03:22:28.668594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.605 [2024-10-09 03:22:28.668603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.605 [2024-10-09 03:22:28.668609] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.605 [2024-10-09 03:22:28.668616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71184 len:8 PRP1 0x0 PRP2 0x0 00:19:45.605 [2024-10-09 03:22:28.668623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.605 [2024-10-09 03:22:28.668637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.605 [2024-10-09 03:22:28.668644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.605 [2024-10-09 03:22:28.668651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71192 len:8 PRP1 0x0 PRP2 0x0 00:19:45.605 [2024-10-09 03:22:28.668658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.605 [2024-10-09 03:22:28.668667] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.605 [2024-10-09 03:22:28.668673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:19:45.605 [2024-10-09 03:22:28.668681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71200 len:8 PRP1 0x0 PRP2 0x0 00:19:45.605 [2024-10-09 03:22:28.668689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.605 [2024-10-09 03:22:28.668697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.605 [2024-10-09 03:22:28.668703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.605 [2024-10-09 03:22:28.668710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71208 len:8 PRP1 0x0 PRP2 0x0 00:19:45.605 [2024-10-09 03:22:28.668717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.605 [2024-10-09 03:22:28.668726] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.605 [2024-10-09 03:22:28.668732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.605 [2024-10-09 03:22:28.668738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71216 len:8 PRP1 0x0 PRP2 0x0 00:19:45.605 [2024-10-09 03:22:28.668745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.605 [2024-10-09 03:22:28.668753] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.605 [2024-10-09 03:22:28.668760] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.605 [2024-10-09 03:22:28.668766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71224 len:8 PRP1 0x0 PRP2 0x0 00:19:45.605 [2024-10-09 03:22:28.668774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.605 [2024-10-09 03:22:28.668782] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.605 [2024-10-09 03:22:28.668788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.605 [2024-10-09 03:22:28.668795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71232 len:8 PRP1 0x0 PRP2 0x0 00:19:45.605 [2024-10-09 03:22:28.668803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.605 [2024-10-09 03:22:28.668811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.605 [2024-10-09 03:22:28.668817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.605 [2024-10-09 03:22:28.668823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71240 len:8 PRP1 0x0 PRP2 0x0 00:19:45.605 [2024-10-09 03:22:28.668831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.605 [2024-10-09 03:22:28.668839] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.605 [2024-10-09 03:22:28.668845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.605 [2024-10-09 
03:22:28.668852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71248 len:8 PRP1 0x0 PRP2 0x0 00:19:45.605 [2024-10-09 03:22:28.668859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.605 [2024-10-09 03:22:28.668872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.605 [2024-10-09 03:22:28.683179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.605 [2024-10-09 03:22:28.683206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71256 len:8 PRP1 0x0 PRP2 0x0 00:19:45.605 [2024-10-09 03:22:28.683217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.605 [2024-10-09 03:22:28.683230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.605 [2024-10-09 03:22:28.683237] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.605 [2024-10-09 03:22:28.683244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71264 len:8 PRP1 0x0 PRP2 0x0 00:19:45.605 [2024-10-09 03:22:28.683252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.605 [2024-10-09 03:22:28.683311] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa13cf0 was disconnected and freed. reset controller. 00:19:45.605 [2024-10-09 03:22:28.683423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:45.605 [2024-10-09 03:22:28.683439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.605 [2024-10-09 03:22:28.683457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:45.605 [2024-10-09 03:22:28.683465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.605 [2024-10-09 03:22:28.683474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:45.605 [2024-10-09 03:22:28.683482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.605 [2024-10-09 03:22:28.683490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:45.605 [2024-10-09 03:22:28.683499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.605 [2024-10-09 03:22:28.683506] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a62e0 is same with the state(6) to be set 00:19:45.605 [2024-10-09 03:22:28.683678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:45.605 [2024-10-09 03:22:28.683700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a62e0 (9): Bad file descriptor 00:19:45.605 [2024-10-09 03:22:28.683790] uring.c: 
665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:45.605 [2024-10-09 03:22:28.683811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a62e0 with addr=10.0.0.3, port=4420 00:19:45.605 [2024-10-09 03:22:28.683821] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a62e0 is same with the state(6) to be set 00:19:45.605 [2024-10-09 03:22:28.683837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a62e0 (9): Bad file descriptor 00:19:45.605 [2024-10-09 03:22:28.683850] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:45.605 [2024-10-09 03:22:28.683859] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:45.605 [2024-10-09 03:22:28.683868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:45.605 [2024-10-09 03:22:28.683885] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:45.605 [2024-10-09 03:22:28.683895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:45.605 03:22:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:19:46.541 4390.50 IOPS, 17.15 MiB/s [2024-10-09T03:22:29.844Z] [2024-10-09 03:22:29.684048] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:46.541 [2024-10-09 03:22:29.684127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a62e0 with addr=10.0.0.3, port=4420 00:19:46.541 [2024-10-09 03:22:29.684145] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a62e0 is same with the state(6) to be set 00:19:46.541 [2024-10-09 03:22:29.684172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a62e0 (9): Bad file descriptor 00:19:46.541 [2024-10-09 03:22:29.684192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:46.541 [2024-10-09 03:22:29.684218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:46.541 [2024-10-09 03:22:29.684244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:46.541 [2024-10-09 03:22:29.684268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:46.541 [2024-10-09 03:22:29.684279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:46.541 03:22:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:46.799 [2024-10-09 03:22:29.970351] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:46.799 03:22:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 82297 00:19:47.625 2927.00 IOPS, 11.43 MiB/s [2024-10-09T03:22:30.928Z] [2024-10-09 03:22:30.700706] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
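The block above is the recovery half of the timeout scenario driven by host/timeout.sh: while the subsystem listener on 10.0.0.3:4420 is gone, every queued WRITE is manually completed with ABORTED - SQ DELETION and each reconnect attempt fails with errno = 111; only once the listener is re-added does the controller reset finally report success. A minimal sketch of that listener toggle, using the same rpc.py path, NQN, address and port that appear in the log (the surrounding sleep/wait choreography of timeout.sh is simplified here):

    # Drop the TCP listener so in-flight I/O is aborted and reconnects are refused.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # Give the host time to hit the failure path (ABORTED - SQ DELETION, errno 111).
    sleep 1
    # Restore the listener; the next reset/reconnect attempt should then succeed.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420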
00:19:49.498 2195.25 IOPS, 8.58 MiB/s [2024-10-09T03:22:33.737Z] 3359.60 IOPS, 13.12 MiB/s [2024-10-09T03:22:34.720Z] 4339.67 IOPS, 16.95 MiB/s [2024-10-09T03:22:35.663Z] 5044.86 IOPS, 19.71 MiB/s [2024-10-09T03:22:36.596Z] 5571.75 IOPS, 21.76 MiB/s [2024-10-09T03:22:37.973Z] 5978.44 IOPS, 23.35 MiB/s [2024-10-09T03:22:37.973Z] 6302.20 IOPS, 24.62 MiB/s 00:19:54.671 Latency(us) 00:19:54.671 [2024-10-09T03:22:37.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.671 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:54.671 Verification LBA range: start 0x0 length 0x4000 00:19:54.671 NVMe0n1 : 10.01 6309.51 24.65 0.00 0.00 20252.65 1206.46 3035150.89 00:19:54.671 [2024-10-09T03:22:37.974Z] =================================================================================================================== 00:19:54.671 [2024-10-09T03:22:37.974Z] Total : 6309.51 24.65 0.00 0.00 20252.65 1206.46 3035150.89 00:19:54.671 { 00:19:54.671 "results": [ 00:19:54.671 { 00:19:54.671 "job": "NVMe0n1", 00:19:54.671 "core_mask": "0x4", 00:19:54.671 "workload": "verify", 00:19:54.671 "status": "finished", 00:19:54.671 "verify_range": { 00:19:54.671 "start": 0, 00:19:54.671 "length": 16384 00:19:54.671 }, 00:19:54.671 "queue_depth": 128, 00:19:54.671 "io_size": 4096, 00:19:54.671 "runtime": 10.008703, 00:19:54.671 "iops": 6309.508834461369, 00:19:54.671 "mibps": 24.64651888461472, 00:19:54.671 "io_failed": 0, 00:19:54.671 "io_timeout": 0, 00:19:54.671 "avg_latency_us": 20252.651686575977, 00:19:54.671 "min_latency_us": 1206.4581818181819, 00:19:54.671 "max_latency_us": 3035150.8945454545 00:19:54.671 } 00:19:54.671 ], 00:19:54.671 "core_count": 1 00:19:54.671 } 00:19:54.671 03:22:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82403 00:19:54.671 03:22:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:54.671 03:22:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:19:54.671 Running I/O for 10 seconds... 
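For reference, the throughput column in the Latency(us) summary and JSON block above is simply the measured IOPS scaled by the 4096-byte I/O size:

    \text{MiB/s} = \frac{\text{IOPS} \times \text{io\_size}}{2^{20}} = \frac{6309.51 \times 4096}{1\,048\,576} \approx 24.65

which matches the "mibps" field in the JSON results. The latency fields are in microseconds, so the 20252.65 average is roughly 20 ms and the ~3.04e6 maximum is roughly 3 s, presumably the I/Os that sat queued across the deliberate listener outage.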
00:19:55.609 03:22:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:55.609 7423.00 IOPS, 29.00 MiB/s [2024-10-09T03:22:38.912Z] [2024-10-09 03:22:38.840497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:69304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.609 [2024-10-09 03:22:38.840562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.609 [2024-10-09 03:22:38.840584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:69312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.609 [2024-10-09 03:22:38.840595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.609 [2024-10-09 03:22:38.840606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:69320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.609 [2024-10-09 03:22:38.840618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.609 [2024-10-09 03:22:38.840629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:69328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.609 [2024-10-09 03:22:38.840638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.609 [2024-10-09 03:22:38.840648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:69336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.609 [2024-10-09 03:22:38.840659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.609 [2024-10-09 03:22:38.840670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:69344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.609 [2024-10-09 03:22:38.840679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.609 [2024-10-09 03:22:38.840689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:69352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.609 [2024-10-09 03:22:38.840697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.609 [2024-10-09 03:22:38.840707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:69360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.609 [2024-10-09 03:22:38.840717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.609 [2024-10-09 03:22:38.840727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:69368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.609 [2024-10-09 03:22:38.840736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.609 [2024-10-09 03:22:38.840747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:69376 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.609 [2024-10-09 03:22:38.840756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.609 [2024-10-09 03:22:38.840767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:69384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.609 [2024-10-09 03:22:38.840777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.609 [2024-10-09 03:22:38.840788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:69392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.609 [2024-10-09 03:22:38.840797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.609 [2024-10-09 03:22:38.840808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:68728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.609 [2024-10-09 03:22:38.840817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.840827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:68736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.610 [2024-10-09 03:22:38.840836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.840847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:68744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.610 [2024-10-09 03:22:38.840856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.840866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:68752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.610 [2024-10-09 03:22:38.840875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.840886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:68760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.610 [2024-10-09 03:22:38.840895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.840910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:68768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.610 [2024-10-09 03:22:38.840920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.840931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:68776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.610 [2024-10-09 03:22:38.840940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.840951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:68784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:55.610 [2024-10-09 03:22:38.840959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.840970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.610 [2024-10-09 03:22:38.840979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.840989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.610 [2024-10-09 03:22:38.840998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.841008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:68808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.610 [2024-10-09 03:22:38.841016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.841027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:68816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.610 [2024-10-09 03:22:38.841036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.841057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:68824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.610 [2024-10-09 03:22:38.841069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.841080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.610 [2024-10-09 03:22:38.841089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.841100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:68840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.610 [2024-10-09 03:22:38.841108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.841118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:68848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.610 [2024-10-09 03:22:38.841127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.841137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:69400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.610 [2024-10-09 03:22:38.841147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.841157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:69408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.610 [2024-10-09 03:22:38.841166] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.841177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:69416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.610 [2024-10-09 03:22:38.841186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.841196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:69424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.610 [2024-10-09 03:22:38.841204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.841215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:69432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.610 [2024-10-09 03:22:38.841224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.841235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:69440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.610 [2024-10-09 03:22:38.841245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.841255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:69448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.610 [2024-10-09 03:22:38.841264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.841274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:69456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.610 [2024-10-09 03:22:38.841283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.841293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:69464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.610 [2024-10-09 03:22:38.841302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.841312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:69472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.610 [2024-10-09 03:22:38.841321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.841331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:69480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.610 [2024-10-09 03:22:38.841340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.841350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:69488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.610 [2024-10-09 03:22:38.841359] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.841369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:69496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.610 [2024-10-09 03:22:38.841378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.841389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:69504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.610 [2024-10-09 03:22:38.841398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.841408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:68856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.610 [2024-10-09 03:22:38.841417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.841428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:68864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.610 [2024-10-09 03:22:38.841438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.841448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:68872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.610 [2024-10-09 03:22:38.841457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.841467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:68880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.610 [2024-10-09 03:22:38.841476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.841487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:68888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.610 [2024-10-09 03:22:38.841495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.841506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:68896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.610 [2024-10-09 03:22:38.841514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.841524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:68904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.610 [2024-10-09 03:22:38.841533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.841545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.610 [2024-10-09 03:22:38.841553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.841564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:69512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.610 [2024-10-09 03:22:38.841572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.841582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:69520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.610 [2024-10-09 03:22:38.841591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.841601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:69528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.610 [2024-10-09 03:22:38.841610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.610 [2024-10-09 03:22:38.841621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:69536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.611 [2024-10-09 03:22:38.841629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.841639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:69544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.611 [2024-10-09 03:22:38.841648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.841659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:69552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.611 [2024-10-09 03:22:38.841668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.841678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:69560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.611 [2024-10-09 03:22:38.841703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.841714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:69568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.611 [2024-10-09 03:22:38.841723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.841734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:69576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.611 [2024-10-09 03:22:38.841742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.841753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:69584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.611 [2024-10-09 03:22:38.841762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 
[2024-10-09 03:22:38.841772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:69592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.611 [2024-10-09 03:22:38.841782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.841792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:69600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.611 [2024-10-09 03:22:38.841801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.841811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:69608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.611 [2024-10-09 03:22:38.841819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.841830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:69616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.611 [2024-10-09 03:22:38.841838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.841849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:68920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.611 [2024-10-09 03:22:38.841858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.841869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:68928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.611 [2024-10-09 03:22:38.841878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.841889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:68936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.611 [2024-10-09 03:22:38.841898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.841908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:68944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.611 [2024-10-09 03:22:38.841916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.841926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:68952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.611 [2024-10-09 03:22:38.841936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.841946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:68960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.611 [2024-10-09 03:22:38.841955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.841965] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.611 [2024-10-09 03:22:38.841975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.841986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:68976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.611 [2024-10-09 03:22:38.841994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.842005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:68984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.611 [2024-10-09 03:22:38.842015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.842036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:68992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.611 [2024-10-09 03:22:38.842054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.842068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:69000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.611 [2024-10-09 03:22:38.842077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.842088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:69008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.611 [2024-10-09 03:22:38.842097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.842116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:69016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.611 [2024-10-09 03:22:38.842142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.842154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:69024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.611 [2024-10-09 03:22:38.842163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.842173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.611 [2024-10-09 03:22:38.842183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.842195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.611 [2024-10-09 03:22:38.842204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.842214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:10 nsid:1 lba:69048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.611 [2024-10-09 03:22:38.842224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.842235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:69056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.611 [2024-10-09 03:22:38.842244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.842255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:69064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.611 [2024-10-09 03:22:38.842264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.842275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:69072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.611 [2024-10-09 03:22:38.842285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.842297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:69080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.611 [2024-10-09 03:22:38.842306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.842316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:69088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.611 [2024-10-09 03:22:38.842325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.842335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:69096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.611 [2024-10-09 03:22:38.842344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.842354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.611 [2024-10-09 03:22:38.842363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.842373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:69624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.611 [2024-10-09 03:22:38.842382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.842399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:69632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.611 [2024-10-09 03:22:38.842409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.842420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:69640 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.611 [2024-10-09 03:22:38.842430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.842440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:69648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.611 [2024-10-09 03:22:38.842449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.611 [2024-10-09 03:22:38.842460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:69656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.611 [2024-10-09 03:22:38.842470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.842480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:69664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.612 [2024-10-09 03:22:38.842489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.842499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:69672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.612 [2024-10-09 03:22:38.842508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.842518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:69680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.612 [2024-10-09 03:22:38.842527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.842538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:69112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.612 [2024-10-09 03:22:38.842562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.842572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:69120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.612 [2024-10-09 03:22:38.842581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.842592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:69128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.612 [2024-10-09 03:22:38.842601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.842611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:69136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.612 [2024-10-09 03:22:38.842620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.842631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:69144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:55.612 [2024-10-09 03:22:38.842640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.842651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:69152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.612 [2024-10-09 03:22:38.842660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.842670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:69160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.612 [2024-10-09 03:22:38.842678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.842689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:69168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.612 [2024-10-09 03:22:38.842698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.842709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:69176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.612 [2024-10-09 03:22:38.842718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.842734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:69184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.612 [2024-10-09 03:22:38.842743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.842754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:69192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.612 [2024-10-09 03:22:38.842763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.842773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:69200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.612 [2024-10-09 03:22:38.842782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.842793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:69208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.612 [2024-10-09 03:22:38.842801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.842811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:69216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.612 [2024-10-09 03:22:38.842821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.842831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:69224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.612 [2024-10-09 03:22:38.842840] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.842851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:69232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.612 [2024-10-09 03:22:38.842871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.842881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:69688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.612 [2024-10-09 03:22:38.842890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.842900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:69696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.612 [2024-10-09 03:22:38.842908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.842919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:69704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.612 [2024-10-09 03:22:38.842928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.842938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:69712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.612 [2024-10-09 03:22:38.842947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.842959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:69720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.612 [2024-10-09 03:22:38.842968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.842978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:69728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.612 [2024-10-09 03:22:38.842987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.842998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:69736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.612 [2024-10-09 03:22:38.843007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.843017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:69744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:55.612 [2024-10-09 03:22:38.843025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.843035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.612 [2024-10-09 03:22:38.843044] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.843060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.612 [2024-10-09 03:22:38.843069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.843089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:69256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.612 [2024-10-09 03:22:38.843099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.843110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:69264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.612 [2024-10-09 03:22:38.843119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.843129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:69272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.612 [2024-10-09 03:22:38.843138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.843148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:69280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.612 [2024-10-09 03:22:38.843157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.843167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:69288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.612 [2024-10-09 03:22:38.843176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.843186] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa15490 is same with the state(6) to be set 00:19:55.612 [2024-10-09 03:22:38.843205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:55.612 [2024-10-09 03:22:38.843213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:55.612 [2024-10-09 03:22:38.843221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69296 len:8 PRP1 0x0 PRP2 0x0 00:19:55.612 [2024-10-09 03:22:38.843229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.843296] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa15490 was disconnected and freed. reset controller. 
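In the abort dump above, SPDK prints completions as "(SCT/SC)": "ABORTED - SQ DELETION (00/08)" is status code type 0x0 (Generic Command Status) with status code 0x08 (Command Aborted due to SQ Deletion), the status assigned to every command still queued on the I/O qpair when it is disconnected and freed. A hypothetical post-mortem on a saved copy of this console output (the console.log name is illustrative) that tallies how many READ/WRITE submissions were printed during the abort pass:

    # Each printed READ/WRITE here is paired with an ABORTED - SQ DELETION completion,
    # so counting them gives the number of aborted commands per opcode.
    grep -oE '\*NOTICE\*: (READ|WRITE) sqid:1' console.log | sort | uniq -c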
00:19:55.612 [2024-10-09 03:22:38.843373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:55.612 [2024-10-09 03:22:38.843393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.843405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:55.612 [2024-10-09 03:22:38.843414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.843425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:55.612 [2024-10-09 03:22:38.843434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.612 [2024-10-09 03:22:38.843443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:55.612 [2024-10-09 03:22:38.843452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.613 [2024-10-09 03:22:38.843461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a62e0 is same with the state(6) to be set 00:19:55.613 [2024-10-09 03:22:38.843647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:55.613 [2024-10-09 03:22:38.843676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a62e0 (9): Bad file descriptor 00:19:55.613 [2024-10-09 03:22:38.843802] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:55.613 [2024-10-09 03:22:38.843830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a62e0 with addr=10.0.0.3, port=4420 00:19:55.613 [2024-10-09 03:22:38.843842] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a62e0 is same with the state(6) to be set 00:19:55.613 [2024-10-09 03:22:38.843861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a62e0 (9): Bad file descriptor 00:19:55.613 [2024-10-09 03:22:38.843876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:55.613 [2024-10-09 03:22:38.843886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:55.613 [2024-10-09 03:22:38.843897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:55.613 [2024-10-09 03:22:38.843915] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:55.613 [2024-10-09 03:22:38.843925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:55.613 03:22:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:19:56.550 4295.50 IOPS, 16.78 MiB/s [2024-10-09T03:22:39.853Z] [2024-10-09 03:22:39.844140] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:56.550 [2024-10-09 03:22:39.844294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a62e0 with addr=10.0.0.3, port=4420 00:19:56.550 [2024-10-09 03:22:39.844310] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a62e0 is same with the state(6) to be set 00:19:56.550 [2024-10-09 03:22:39.844341] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a62e0 (9): Bad file descriptor 00:19:56.550 [2024-10-09 03:22:39.844362] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:56.550 [2024-10-09 03:22:39.844372] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:56.550 [2024-10-09 03:22:39.844382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:56.550 [2024-10-09 03:22:39.844413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:56.550 [2024-10-09 03:22:39.844425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:57.746 2863.67 IOPS, 11.19 MiB/s [2024-10-09T03:22:41.049Z] [2024-10-09 03:22:40.844660] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:57.746 [2024-10-09 03:22:40.844743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a62e0 with addr=10.0.0.3, port=4420 00:19:57.746 [2024-10-09 03:22:40.844760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a62e0 is same with the state(6) to be set 00:19:57.746 [2024-10-09 03:22:40.844787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a62e0 (9): Bad file descriptor 00:19:57.746 [2024-10-09 03:22:40.844807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:57.746 [2024-10-09 03:22:40.844819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:57.746 [2024-10-09 03:22:40.844829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:57.746 [2024-10-09 03:22:40.844855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:57.746 [2024-10-09 03:22:40.844867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:58.683 2147.75 IOPS, 8.39 MiB/s [2024-10-09T03:22:41.986Z] [2024-10-09 03:22:41.848569] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.683 [2024-10-09 03:22:41.848642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a62e0 with addr=10.0.0.3, port=4420 00:19:58.683 [2024-10-09 03:22:41.848659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a62e0 is same with the state(6) to be set 00:19:58.683 [2024-10-09 03:22:41.848883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a62e0 (9): Bad file descriptor 00:19:58.683 [2024-10-09 03:22:41.849177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:58.683 [2024-10-09 03:22:41.849199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:58.683 [2024-10-09 03:22:41.849211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:58.683 [2024-10-09 03:22:41.853069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:58.683 [2024-10-09 03:22:41.853118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:58.683 03:22:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:58.941 [2024-10-09 03:22:42.142976] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:58.941 03:22:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82403 00:19:59.776 1718.20 IOPS, 6.71 MiB/s [2024-10-09T03:22:43.079Z] [2024-10-09 03:22:42.894169] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
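The reset can only complete once the listener exists again: right after the nvmf_subsystem_add_listener call above, the target logs that it is listening on 10.0.0.3 port 4420, the next reconnect attempt succeeds, bdev_nvme reports "Resetting controller successful", and bdevperf's throughput recovers over the following seconds. A condensed sketch of the restore step, with the same NQN, address and port as in the trace (script path specific to this environment):

  # Re-create the TCP listener; the host's periodic reconnect attempts can now
  # complete before bdev_nvme gives up on the controller.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420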
00:20:01.650 2767.67 IOPS, 10.81 MiB/s [2024-10-09T03:22:45.889Z] 3699.14 IOPS, 14.45 MiB/s [2024-10-09T03:22:46.837Z] 4421.75 IOPS, 17.27 MiB/s [2024-10-09T03:22:47.772Z] 4962.44 IOPS, 19.38 MiB/s [2024-10-09T03:22:47.772Z] 5386.20 IOPS, 21.04 MiB/s 00:20:04.469 Latency(us) 00:20:04.469 [2024-10-09T03:22:47.772Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.469 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:04.469 Verification LBA range: start 0x0 length 0x4000 00:20:04.469 NVMe0n1 : 10.01 5393.32 21.07 3782.12 0.00 13924.93 651.64 3019898.88 00:20:04.469 [2024-10-09T03:22:47.772Z] =================================================================================================================== 00:20:04.469 [2024-10-09T03:22:47.772Z] Total : 5393.32 21.07 3782.12 0.00 13924.93 0.00 3019898.88 00:20:04.469 { 00:20:04.469 "results": [ 00:20:04.469 { 00:20:04.469 "job": "NVMe0n1", 00:20:04.469 "core_mask": "0x4", 00:20:04.469 "workload": "verify", 00:20:04.469 "status": "finished", 00:20:04.469 "verify_range": { 00:20:04.469 "start": 0, 00:20:04.469 "length": 16384 00:20:04.469 }, 00:20:04.469 "queue_depth": 128, 00:20:04.469 "io_size": 4096, 00:20:04.469 "runtime": 10.010527, 00:20:04.470 "iops": 5393.322449457456, 00:20:04.470 "mibps": 21.067665818193188, 00:20:04.470 "io_failed": 37861, 00:20:04.470 "io_timeout": 0, 00:20:04.470 "avg_latency_us": 13924.931257322878, 00:20:04.470 "min_latency_us": 651.6363636363636, 00:20:04.470 "max_latency_us": 3019898.88 00:20:04.470 } 00:20:04.470 ], 00:20:04.470 "core_count": 1 00:20:04.470 } 00:20:04.470 03:22:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82275 00:20:04.470 03:22:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 82275 ']' 00:20:04.470 03:22:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 82275 00:20:04.470 03:22:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:20:04.470 03:22:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:04.470 03:22:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82275 00:20:04.470 03:22:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:04.470 03:22:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:04.470 killing process with pid 82275 00:20:04.470 03:22:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82275' 00:20:04.470 Received shutdown signal, test time was about 10.000000 seconds 00:20:04.470 00:20:04.470 Latency(us) 00:20:04.470 [2024-10-09T03:22:47.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.470 [2024-10-09T03:22:47.773Z] =================================================================================================================== 00:20:04.470 [2024-10-09T03:22:47.773Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:04.470 03:22:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 82275 00:20:04.470 03:22:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 82275 00:20:05.038 03:22:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82522 00:20:05.038 03:22:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:20:05.038 03:22:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82522 /var/tmp/bdevperf.sock 00:20:05.038 03:22:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 82522 ']' 00:20:05.038 03:22:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:05.038 03:22:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:05.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:05.038 03:22:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:05.038 03:22:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:05.038 03:22:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:05.038 [2024-10-09 03:22:48.109097] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:20:05.038 [2024-10-09 03:22:48.109184] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82522 ] 00:20:05.038 [2024-10-09 03:22:48.246312] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.038 [2024-10-09 03:22:48.336689] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:20:05.297 [2024-10-09 03:22:48.410371] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:05.865 03:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:05.865 03:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:20:05.865 03:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82534 00:20:05.865 03:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82522 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:20:05.865 03:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:20:06.124 03:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:20:06.383 NVMe0n1 00:20:06.641 03:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82576 00:20:06.641 03:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:06.641 03:22:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:20:06.641 Running I/O for 10 seconds... 
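For this second scenario the trace starts a fresh bdevperf instance and wires it up over its own RPC socket before launching the 10-second randread run. A condensed, hedged re-statement of the commands visible above, in execution order; paths and the PID 82522 are specific to this CI run, and the backgrounding with & stands in for the harness's own wrappers:

  # 1. Start bdevperf idle (-z) on core mask 0x4 with a private RPC socket.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
      -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &

  # 2. Attach the timeout bpftrace script to the bdevperf process.
  /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82522 \
      /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt &

  # 3. Apply the retry and ACK-timeout settings this test relies on.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_set_options -r -1 -e 9

  # 4. Attach the TCP controller with a 5 s ctrlr-loss timeout and 2 s reconnect delay.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

  # 5. Kick off the actual I/O run against the resulting NVMe0n1 bdev.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests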
00:20:07.577 03:22:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:07.840 14241.00 IOPS, 55.63 MiB/s [2024-10-09T03:22:51.143Z] [2024-10-09 03:22:50.966689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.966766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.966797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.966807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.966817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.966827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.966837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.966846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.966855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.966864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.966873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.966882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.966891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.966916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.966925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.966934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.966958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.966967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.966976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.966984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 
00:20:07.840 [2024-10-09 03:22:50.966993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.967001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.967009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.967018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.967026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.967052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.967061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.967070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.967078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.967087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.967095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.967104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.967130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.967142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.967154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.967163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.967172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.967182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.967191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.967200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.967209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.967218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.967226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.967235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.967244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.967253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.967262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.967271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.967280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.967289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.967297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.967306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.840 [2024-10-09 03:22:50.967315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967413] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the 
state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.967992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006250 is same with the state(6) to be set 00:20:07.841 [2024-10-09 03:22:50.968049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.841 [2024-10-09 03:22:50.968100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.841 [2024-10-09 03:22:50.968122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.841 [2024-10-09 03:22:50.968132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:07.841 [2024-10-09 03:22:50.968142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:41048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.841 [2024-10-09 03:22:50.968152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.841 [2024-10-09 03:22:50.968162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:60248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.841 [2024-10-09 03:22:50.968171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.841 [2024-10-09 03:22:50.968181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:54608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.841 [2024-10-09 03:22:50.968189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.841 [2024-10-09 03:22:50.968200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:81704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.841 [2024-10-09 03:22:50.968224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.841 [2024-10-09 03:22:50.968234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:97720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.841 [2024-10-09 03:22:50.968242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.841 [2024-10-09 03:22:50.968253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:89800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.841 [2024-10-09 03:22:50.968261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.841 [2024-10-09 03:22:50.968270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:66944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:90944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:85016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 
03:22:50.968340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:72696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:44728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:97776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:121824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:57160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968520] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:112200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:33152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:60944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:42920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:43032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:30016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:90168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:100608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:38 nsid:1 lba:89416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:115600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:48536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:60896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:124552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:76992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12832 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:108264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:118624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:47504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.968990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:73192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.968999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.969009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:89616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.969017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.842 [2024-10-09 03:22:50.969027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:87648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.842 [2024-10-09 03:22:50.969035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:56960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:07.843 [2024-10-09 03:22:50.969110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:58344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:68496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:88008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:29072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:33416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:103656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969295] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:103664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:123248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:85288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:72304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:42112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:125208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:68400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969478] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:130336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:128120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:86696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:46472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:46864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:26544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:52488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:18032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:101512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:109152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:36376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:119832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:47448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:129624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.843 [2024-10-09 03:22:50.969830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:93800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.843 [2024-10-09 03:22:50.969838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.844 [2024-10-09 03:22:50.969847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:108168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.969855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.844 [2024-10-09 03:22:50.969870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:30904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.969878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.844 [2024-10-09 03:22:50.969888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:27752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.969896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.844 [2024-10-09 03:22:50.969905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.969914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.844 [2024-10-09 03:22:50.969923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:77008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.969933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.844 [2024-10-09 03:22:50.969943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:104488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.969951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.844 [2024-10-09 03:22:50.969961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:73512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.969969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.844 [2024-10-09 03:22:50.969978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:49688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.969987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.844 [2024-10-09 03:22:50.969996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:45712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.970004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.844 [2024-10-09 03:22:50.970018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.970027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.844 [2024-10-09 03:22:50.970036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:67200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.970044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:07.844 [2024-10-09 03:22:50.970065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:34744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.970074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.844 [2024-10-09 03:22:50.970083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:121712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.970092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.844 [2024-10-09 03:22:50.970102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:97352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.970118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.844 [2024-10-09 03:22:50.970144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:129784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.970153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.844 [2024-10-09 03:22:50.970163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:86672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.970171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.844 [2024-10-09 03:22:50.970181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:88240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.970189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.844 [2024-10-09 03:22:50.970205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:110016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.970213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.844 [2024-10-09 03:22:50.970223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:35472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.970232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.844 [2024-10-09 03:22:50.970242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:57488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.970250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.844 [2024-10-09 03:22:50.970260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.970268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.844 [2024-10-09 
03:22:50.970277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.970285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.844 [2024-10-09 03:22:50.970295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:59592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.970303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.844 [2024-10-09 03:22:50.970312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:76040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.970321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.844 [2024-10-09 03:22:50.970331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.970339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.844 [2024-10-09 03:22:50.970356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:96944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.970364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.844 [2024-10-09 03:22:50.970374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:114344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.970382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.844 [2024-10-09 03:22:50.970392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.970400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.844 [2024-10-09 03:22:50.970410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:103008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.970418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.844 [2024-10-09 03:22:50.970428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.970436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.844 [2024-10-09 03:22:50.970446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.970454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.844 [2024-10-09 03:22:50.970464] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:67224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.970472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.844 [2024-10-09 03:22:50.970486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:68624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.970494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.844 [2024-10-09 03:22:50.970509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:92864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.970517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.844 [2024-10-09 03:22:50.970528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:73984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.844 [2024-10-09 03:22:50.970536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.845 [2024-10-09 03:22:50.970545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdbd00 is same with the state(6) to be set 00:20:07.845 [2024-10-09 03:22:50.970574] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.845 [2024-10-09 03:22:50.970582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.845 [2024-10-09 03:22:50.970589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14832 len:8 PRP1 0x0 PRP2 0x0 00:20:07.845 [2024-10-09 03:22:50.970597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.845 [2024-10-09 03:22:50.970654] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1fdbd00 was disconnected and freed. reset controller. 
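The block above is the host NVMe driver dumping every READ that was still outstanding on I/O qpair 1 when the submission queue was torn down during the forced controller reset: each pair of NOTICE lines is the original command followed by its completion with status "ABORTED - SQ DELETION" (SCT 0, SC 0x08), a generic abort rather than a media or data error, after which the qpair is freed and the controller reset begins. When post-processing a saved copy of a run like this, the flood can be reduced to a single count; the file name below is illustrative, assuming the console output was captured to a file:

    grep -c 'ABORTED - SQ DELETION' nvmf-timeout-console.log   # number of in-flight I/Os aborted by the SQ deletion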
00:20:07.845 [2024-10-09 03:22:50.970728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.845 [2024-10-09 03:22:50.970742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.845 [2024-10-09 03:22:50.970752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.845 [2024-10-09 03:22:50.970760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.845 [2024-10-09 03:22:50.970768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.845 [2024-10-09 03:22:50.970777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.845 [2024-10-09 03:22:50.970792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.845 [2024-10-09 03:22:50.970800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.845 [2024-10-09 03:22:50.970808] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f6e2e0 is same with the state(6) to be set 00:20:07.845 [2024-10-09 03:22:50.971019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:07.845 [2024-10-09 03:22:50.971156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6e2e0 (9): Bad file descriptor 00:20:07.845 [2024-10-09 03:22:50.971263] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.845 [2024-10-09 03:22:50.971284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f6e2e0 with addr=10.0.0.3, port=4420 00:20:07.845 [2024-10-09 03:22:50.971295] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f6e2e0 is same with the state(6) to be set 00:20:07.845 [2024-10-09 03:22:50.971311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6e2e0 (9): Bad file descriptor 00:20:07.845 [2024-10-09 03:22:50.971326] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:07.845 [2024-10-09 03:22:50.971334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:07.845 [2024-10-09 03:22:50.971344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:07.845 [2024-10-09 03:22:50.971362] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
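Here errno = 111 is ECONNREFUSED: the target listener at 10.0.0.3:4420 is not accepting connections at this point (this part of the timeout test exercises the reconnect path), so the uring socket connect fails immediately, the controller is marked failed, and bdev_nvme schedules another reset. The roughly two-second spacing of the retries that follow is governed by the bdev_nvme reconnect parameters; a minimal sketch of attaching such a controller with explicit reconnect settings is shown below, with values that are illustrative rather than taken from this run:

    # Sketch only: attach a TCP controller with a 2 s reconnect delay and a bounded controller-loss timeout.
    rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 --reconnect-delay-sec 2 --ctrlr-loss-timeout-sec 10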
00:20:07.845 [2024-10-09 03:22:50.987944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:07.845 03:22:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82576 00:20:09.718 8084.00 IOPS, 31.58 MiB/s [2024-10-09T03:22:53.021Z] 5389.33 IOPS, 21.05 MiB/s [2024-10-09T03:22:53.021Z] [2024-10-09 03:22:52.988135] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.718 [2024-10-09 03:22:52.988182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f6e2e0 with addr=10.0.0.3, port=4420 00:20:09.718 [2024-10-09 03:22:52.988223] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f6e2e0 is same with the state(6) to be set 00:20:09.718 [2024-10-09 03:22:52.988263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6e2e0 (9): Bad file descriptor 00:20:09.718 [2024-10-09 03:22:52.988283] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:09.718 [2024-10-09 03:22:52.988292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:09.718 [2024-10-09 03:22:52.988302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:09.718 [2024-10-09 03:22:52.988323] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.718 [2024-10-09 03:22:52.988334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:11.591 4042.00 IOPS, 15.79 MiB/s [2024-10-09T03:22:55.153Z] 3233.60 IOPS, 12.63 MiB/s [2024-10-09T03:22:55.153Z] [2024-10-09 03:22:54.988472] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.850 [2024-10-09 03:22:54.988521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f6e2e0 with addr=10.0.0.3, port=4420 00:20:11.850 [2024-10-09 03:22:54.988536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f6e2e0 is same with the state(6) to be set 00:20:11.850 [2024-10-09 03:22:54.988571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6e2e0 (9): Bad file descriptor 00:20:11.850 [2024-10-09 03:22:54.988588] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:11.850 [2024-10-09 03:22:54.988597] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:11.850 [2024-10-09 03:22:54.988607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:11.850 [2024-10-09 03:22:54.988628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.850 [2024-10-09 03:22:54.988638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:13.721 2694.67 IOPS, 10.53 MiB/s [2024-10-09T03:22:57.024Z] 2309.71 IOPS, 9.02 MiB/s [2024-10-09T03:22:57.024Z] [2024-10-09 03:22:56.988707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
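The interleaved "IOPS, MiB/s" lines are periodic throughput samples from the 4096-byte random-read job summarized in the result block below, so bandwidth is just IOPS * 4096 / 2^20. A quick worked check of the first sample:

    awk 'BEGIN { printf "%.2f\n", 8084 * 4096 / 1048576 }'   # prints 31.58, matching "8084.00 IOPS, 31.58 MiB/s"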
00:20:13.721 [2024-10-09 03:22:56.988930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:13.721 [2024-10-09 03:22:56.989144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:13.721 [2024-10-09 03:22:56.989314] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:20:13.721 [2024-10-09 03:22:56.989374] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:14.918 2021.00 IOPS, 7.89 MiB/s 00:20:14.918 Latency(us) 00:20:14.918 [2024-10-09T03:22:58.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.918 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:20:14.918 NVMe0n1 : 8.14 1985.15 7.75 15.72 0.00 63863.20 1199.01 7015926.69 00:20:14.918 [2024-10-09T03:22:58.221Z] =================================================================================================================== 00:20:14.918 [2024-10-09T03:22:58.221Z] Total : 1985.15 7.75 15.72 0.00 63863.20 1199.01 7015926.69 00:20:14.918 { 00:20:14.918 "results": [ 00:20:14.918 { 00:20:14.918 "job": "NVMe0n1", 00:20:14.918 "core_mask": "0x4", 00:20:14.918 "workload": "randread", 00:20:14.918 "status": "finished", 00:20:14.918 "queue_depth": 128, 00:20:14.918 "io_size": 4096, 00:20:14.918 "runtime": 8.144465, 00:20:14.918 "iops": 1985.1518792210416, 00:20:14.918 "mibps": 7.754499528207194, 00:20:14.918 "io_failed": 128, 00:20:14.918 "io_timeout": 0, 00:20:14.918 "avg_latency_us": 63863.20389164101, 00:20:14.918 "min_latency_us": 1199.010909090909, 00:20:14.918 "max_latency_us": 7015926.69090909 00:20:14.918 } 00:20:14.918 ], 00:20:14.918 "core_count": 1 00:20:14.918 } 00:20:14.918 03:22:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:14.918 Attaching 5 probes... 
00:20:14.918 1344.146227: reset bdev controller NVMe0 00:20:14.918 1344.336411: reconnect bdev controller NVMe0 00:20:14.918 3361.152204: reconnect delay bdev controller NVMe0 00:20:14.918 3361.168804: reconnect bdev controller NVMe0 00:20:14.918 5361.510272: reconnect delay bdev controller NVMe0 00:20:14.918 5361.526585: reconnect bdev controller NVMe0 00:20:14.918 7361.812500: reconnect delay bdev controller NVMe0 00:20:14.918 7361.829103: reconnect bdev controller NVMe0 00:20:14.918 03:22:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:20:14.918 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:20:14.918 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82534 00:20:14.918 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:14.918 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82522 00:20:14.918 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 82522 ']' 00:20:14.918 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 82522 00:20:14.918 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:20:14.918 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:14.918 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82522 00:20:14.918 killing process with pid 82522 00:20:14.918 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:14.918 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:14.918 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82522' 00:20:14.918 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 82522 00:20:14.918 Received shutdown signal, test time was about 8.217364 seconds 00:20:14.918 00:20:14.918 Latency(us) 00:20:14.918 [2024-10-09T03:22:58.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.918 [2024-10-09T03:22:58.221Z] =================================================================================================================== 00:20:14.918 [2024-10-09T03:22:58.221Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:14.918 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 82522 00:20:15.177 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:15.436 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:20:15.436 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:20:15.436 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:15.436 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:20:15.436 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:15.436 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:20:15.436 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:15.436 03:22:58 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:15.436 rmmod nvme_tcp 00:20:15.436 rmmod nvme_fabrics 00:20:15.436 rmmod nvme_keyring 00:20:15.436 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:15.436 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:20:15.436 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:20:15.436 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@515 -- # '[' -n 82078 ']' 00:20:15.436 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # killprocess 82078 00:20:15.436 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 82078 ']' 00:20:15.436 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 82078 00:20:15.436 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:20:15.436 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:15.436 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82078 00:20:15.436 killing process with pid 82078 00:20:15.436 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:15.436 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:15.436 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82078' 00:20:15.436 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 82078 00:20:15.436 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 82078 00:20:15.695 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:15.695 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:15.695 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:15.695 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:20:15.695 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@789 -- # iptables-save 00:20:15.695 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:15.695 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@789 -- # iptables-restore 00:20:15.695 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:15.695 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:15.695 03:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:15.954 03:22:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:15.954 03:22:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:15.954 03:22:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:15.954 03:22:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:15.954 03:22:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:15.954 03:22:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:15.954 03:22:59 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:15.954 03:22:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:15.954 03:22:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:15.954 03:22:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:15.954 03:22:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:15.954 03:22:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:15.954 03:22:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:15.954 03:22:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.954 03:22:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:15.954 03:22:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.954 03:22:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:20:15.954 00:20:15.954 real 0m48.228s 00:20:15.954 user 2m20.949s 00:20:15.954 sys 0m6.036s 00:20:15.954 ************************************ 00:20:15.954 END TEST nvmf_timeout 00:20:15.954 ************************************ 00:20:15.954 03:22:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:15.954 03:22:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:15.954 03:22:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:20:15.954 03:22:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:20:15.954 ************************************ 00:20:15.954 END TEST nvmf_host 00:20:15.954 ************************************ 00:20:15.954 00:20:15.954 real 5m19.321s 00:20:15.954 user 13m48.718s 00:20:15.954 sys 1m11.631s 00:20:15.954 03:22:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:15.954 03:22:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.213 03:22:59 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:20:16.213 03:22:59 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:20:16.213 ************************************ 00:20:16.213 END TEST nvmf_tcp 00:20:16.213 ************************************ 00:20:16.213 00:20:16.213 real 13m10.530s 00:20:16.213 user 31m40.344s 00:20:16.213 sys 3m13.927s 00:20:16.213 03:22:59 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:16.213 03:22:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:16.213 03:22:59 -- spdk/autotest.sh@281 -- # [[ 1 -eq 0 ]] 00:20:16.213 03:22:59 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:16.213 03:22:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:16.213 03:22:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:16.213 03:22:59 -- common/autotest_common.sh@10 -- # set +x 00:20:16.213 ************************************ 00:20:16.213 START TEST nvmf_dif 00:20:16.213 ************************************ 00:20:16.213 03:22:59 nvmf_dif -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:16.213 * Looking for test storage... 
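The nvmf_dif suite starting here tests end-to-end data protection (DIF insert/strip) over NVMe/TCP: the trace below re-creates the veth/namespace test network, starts nvmf_tgt inside the namespace, enables --dif-insert-or-strip on the TCP transport, and backs the test subsystem with a null bdev carrying 16 bytes of metadata and DIF type 1. Condensed, the target-side RPC flow looks roughly like the sketch below; the first three commands appear verbatim further down in this trace, while the namespace and listener steps and the 10.0.0.3:4420 address are the usual continuation of that flow and are shown only for orientation:

    rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
    rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420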
00:20:16.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:16.213 03:22:59 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:16.213 03:22:59 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:20:16.213 03:22:59 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:16.213 03:22:59 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:16.213 03:22:59 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:16.213 03:22:59 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:16.213 03:22:59 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:16.213 03:22:59 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:20:16.213 03:22:59 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:20:16.213 03:22:59 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:20:16.213 03:22:59 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:20:16.213 03:22:59 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:20:16.213 03:22:59 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:20:16.213 03:22:59 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:20:16.213 03:22:59 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:16.213 03:22:59 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:20:16.213 03:22:59 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:20:16.213 03:22:59 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:16.213 03:22:59 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:16.213 03:22:59 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:20:16.213 03:22:59 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:20:16.213 03:22:59 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:16.213 03:22:59 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:20:16.213 03:22:59 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:20:16.213 03:22:59 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:20:16.213 03:22:59 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:20:16.213 03:22:59 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:16.213 03:22:59 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:20:16.213 03:22:59 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:20:16.213 03:22:59 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:16.213 03:22:59 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:16.213 03:22:59 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:20:16.213 03:22:59 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:16.473 03:22:59 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:16.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.473 --rc genhtml_branch_coverage=1 00:20:16.473 --rc genhtml_function_coverage=1 00:20:16.473 --rc genhtml_legend=1 00:20:16.473 --rc geninfo_all_blocks=1 00:20:16.473 --rc geninfo_unexecuted_blocks=1 00:20:16.473 00:20:16.473 ' 00:20:16.473 03:22:59 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:16.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.473 --rc genhtml_branch_coverage=1 00:20:16.473 --rc genhtml_function_coverage=1 00:20:16.473 --rc genhtml_legend=1 00:20:16.473 --rc geninfo_all_blocks=1 00:20:16.473 --rc geninfo_unexecuted_blocks=1 00:20:16.473 00:20:16.473 ' 00:20:16.473 03:22:59 nvmf_dif -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:20:16.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.473 --rc genhtml_branch_coverage=1 00:20:16.473 --rc genhtml_function_coverage=1 00:20:16.473 --rc genhtml_legend=1 00:20:16.473 --rc geninfo_all_blocks=1 00:20:16.473 --rc geninfo_unexecuted_blocks=1 00:20:16.473 00:20:16.473 ' 00:20:16.473 03:22:59 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:16.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.473 --rc genhtml_branch_coverage=1 00:20:16.473 --rc genhtml_function_coverage=1 00:20:16.473 --rc genhtml_legend=1 00:20:16.473 --rc geninfo_all_blocks=1 00:20:16.473 --rc geninfo_unexecuted_blocks=1 00:20:16.473 00:20:16.473 ' 00:20:16.473 03:22:59 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:16.473 03:22:59 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:20:16.473 03:22:59 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:16.473 03:22:59 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:16.473 03:22:59 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:16.473 03:22:59 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.473 03:22:59 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.473 03:22:59 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.473 03:22:59 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:20:16.473 03:22:59 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:16.473 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:16.473 03:22:59 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:20:16.473 03:22:59 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:20:16.473 03:22:59 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:20:16.473 03:22:59 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:20:16.473 03:22:59 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.473 03:22:59 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:16.473 03:22:59 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.473 03:22:59 nvmf_dif -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:20:16.474 03:22:59 
nvmf_dif -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@458 -- # nvmf_veth_init 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:16.474 Cannot find device "nvmf_init_br" 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@162 -- # true 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:16.474 Cannot find device "nvmf_init_br2" 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@163 -- # true 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:16.474 Cannot find device "nvmf_tgt_br" 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@164 -- # true 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:16.474 Cannot find device "nvmf_tgt_br2" 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@165 -- # true 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:16.474 Cannot find device "nvmf_init_br" 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@166 -- # true 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:16.474 Cannot find device "nvmf_init_br2" 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@167 -- # true 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:16.474 Cannot find device "nvmf_tgt_br" 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@168 -- # true 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:16.474 Cannot find device "nvmf_tgt_br2" 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@169 -- # true 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:16.474 Cannot find device "nvmf_br" 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@170 -- # true 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:20:16.474 Cannot find device "nvmf_init_if" 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@171 -- # true 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:16.474 Cannot find device "nvmf_init_if2" 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@172 -- # true 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:16.474 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@173 -- # true 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:16.474 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@174 -- # true 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:16.474 03:22:59 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:16.733 03:22:59 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:16.733 03:22:59 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:16.733 03:22:59 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:16.733 03:22:59 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:16.733 03:22:59 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:16.733 03:22:59 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:16.733 03:22:59 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:16.733 03:22:59 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:16.733 03:22:59 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:16.733 03:22:59 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:16.733 03:22:59 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:16.733 03:22:59 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:16.733 03:22:59 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:16.733 03:22:59 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:16.733 03:22:59 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:16.733 03:22:59 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:16.733 03:22:59 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:16.733 03:22:59 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:16.733 03:22:59 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:16.733 03:22:59 nvmf_dif -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:16.733 03:22:59 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:16.733 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:16.733 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:20:16.733 00:20:16.733 --- 10.0.0.3 ping statistics --- 00:20:16.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.733 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:20:16.733 03:22:59 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:16.733 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:16.733 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:20:16.733 00:20:16.733 --- 10.0.0.4 ping statistics --- 00:20:16.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.733 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:20:16.733 03:22:59 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:16.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:16.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:20:16.733 00:20:16.733 --- 10.0.0.1 ping statistics --- 00:20:16.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.733 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:20:16.733 03:22:59 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:16.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:16.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:20:16.733 00:20:16.733 --- 10.0.0.2 ping statistics --- 00:20:16.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.733 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:20:16.733 03:22:59 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:16.733 03:22:59 nvmf_dif -- nvmf/common.sh@459 -- # return 0 00:20:16.733 03:22:59 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:20:16.733 03:22:59 nvmf_dif -- nvmf/common.sh@477 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:16.991 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:16.991 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:16.991 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:17.250 03:23:00 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:17.250 03:23:00 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:17.250 03:23:00 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:17.250 03:23:00 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:17.250 03:23:00 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:17.250 03:23:00 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:17.250 03:23:00 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:20:17.250 03:23:00 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:20:17.250 03:23:00 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:17.250 03:23:00 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:17.250 03:23:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:17.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.250 03:23:00 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=83074 00:20:17.250 03:23:00 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 83074 00:20:17.250 03:23:00 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:17.250 03:23:00 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 83074 ']' 00:20:17.250 03:23:00 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.250 03:23:00 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:17.250 03:23:00 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.250 03:23:00 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:17.250 03:23:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:17.250 [2024-10-09 03:23:00.405184] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:20:17.250 [2024-10-09 03:23:00.405273] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:17.250 [2024-10-09 03:23:00.541677] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.509 [2024-10-09 03:23:00.656897] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
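The nvmf/common.sh block above builds an isolated NVMe/TCP test network: a dedicated network namespace for the target, two veth pairs on each side, a bridge joining the host-side peers, iptables ACCEPT rules for port 4420, and ping checks in both directions. A condensed sketch of the same steps (interface names and addresses are taken from the log; the second interface pair, the FORWARD rule and the initial cleanup of stale interfaces are omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                               # bridge the two host-side peers
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                            # host -> namespace reachability check

The target is then launched inside that namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF), so it listens on 10.0.0.3/10.0.0.4 while fio connects from the host side of the bridge.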
00:20:17.509 [2024-10-09 03:23:00.656990] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:17.509 [2024-10-09 03:23:00.657015] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:17.509 [2024-10-09 03:23:00.657031] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:17.509 [2024-10-09 03:23:00.657066] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:17.509 [2024-10-09 03:23:00.657582] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.509 [2024-10-09 03:23:00.714390] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:18.454 03:23:01 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:18.454 03:23:01 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:20:18.454 03:23:01 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:18.454 03:23:01 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:18.454 03:23:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:18.454 03:23:01 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:18.454 03:23:01 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:20:18.454 03:23:01 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:20:18.454 03:23:01 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.454 03:23:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:18.454 [2024-10-09 03:23:01.503610] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:18.454 03:23:01 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.454 03:23:01 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:20:18.454 03:23:01 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:18.454 03:23:01 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:18.454 03:23:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:18.454 ************************************ 00:20:18.454 START TEST fio_dif_1_default 00:20:18.454 ************************************ 00:20:18.454 03:23:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:20:18.454 03:23:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:20:18.454 03:23:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:20:18.454 03:23:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:20:18.454 03:23:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:20:18.454 03:23:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:20:18.454 03:23:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:18.454 03:23:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.454 03:23:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:18.454 bdev_null0 00:20:18.454 03:23:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.454 03:23:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:18.454 
03:23:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.454 03:23:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:18.454 03:23:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.454 03:23:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:18.454 03:23:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:18.455 [2024-10-09 03:23:01.555738] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:18.455 { 00:20:18.455 "params": { 00:20:18.455 "name": "Nvme$subsystem", 00:20:18.455 "trtype": "$TEST_TRANSPORT", 00:20:18.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.455 "adrfam": "ipv4", 00:20:18.455 "trsvcid": "$NVMF_PORT", 00:20:18.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.455 "hdgst": ${hdgst:-false}, 00:20:18.455 "ddgst": ${ddgst:-false} 00:20:18.455 }, 00:20:18.455 "method": "bdev_nvme_attach_controller" 00:20:18.455 } 00:20:18.455 EOF 00:20:18.455 )") 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:20:18.455 "params": { 00:20:18.455 "name": "Nvme0", 00:20:18.455 "trtype": "tcp", 00:20:18.455 "traddr": "10.0.0.3", 00:20:18.455 "adrfam": "ipv4", 00:20:18.455 "trsvcid": "4420", 00:20:18.455 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:18.455 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:18.455 "hdgst": false, 00:20:18.455 "ddgst": false 00:20:18.455 }, 00:20:18.455 "method": "bdev_nvme_attach_controller" 00:20:18.455 }' 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:18.455 03:23:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:18.713 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:18.713 fio-3.35 00:20:18.713 Starting 1 thread 00:20:30.919 00:20:30.919 filename0: (groupid=0, jobs=1): err= 0: pid=83136: Wed Oct 9 03:23:12 2024 00:20:30.919 read: IOPS=9561, BW=37.4MiB/s (39.2MB/s)(374MiB/10001msec) 00:20:30.919 slat (nsec): min=5811, max=89505, avg=8402.66, stdev=3661.63 00:20:30.919 clat (usec): min=317, max=2949, avg=393.58, stdev=57.07 00:20:30.919 lat (usec): min=323, max=2982, avg=401.99, stdev=57.49 00:20:30.919 clat percentiles (usec): 00:20:30.919 | 1.00th=[ 330], 5.00th=[ 
338], 10.00th=[ 347], 20.00th=[ 355], 00:20:30.919 | 30.00th=[ 363], 40.00th=[ 371], 50.00th=[ 379], 60.00th=[ 392], 00:20:30.919 | 70.00th=[ 404], 80.00th=[ 424], 90.00th=[ 465], 95.00th=[ 502], 00:20:30.919 | 99.00th=[ 562], 99.50th=[ 586], 99.90th=[ 644], 99.95th=[ 922], 00:20:30.919 | 99.99th=[ 1991] 00:20:30.919 bw ( KiB/s): min=29440, max=40576, per=100.00%, avg=38362.95, stdev=3001.21, samples=19 00:20:30.919 iops : min= 7360, max=10144, avg=9590.74, stdev=750.30, samples=19 00:20:30.919 lat (usec) : 500=94.68%, 750=5.25%, 1000=0.05% 00:20:30.919 lat (msec) : 2=0.02%, 4=0.01% 00:20:30.919 cpu : usr=83.52%, sys=14.33%, ctx=16, majf=0, minf=9 00:20:30.919 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:30.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.919 issued rwts: total=95628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.919 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:30.919 00:20:30.919 Run status group 0 (all jobs): 00:20:30.919 READ: bw=37.4MiB/s (39.2MB/s), 37.4MiB/s-37.4MiB/s (39.2MB/s-39.2MB/s), io=374MiB (392MB), run=10001-10001msec 00:20:30.919 03:23:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:20:30.919 03:23:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:20:30.919 03:23:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:20:30.919 03:23:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:30.919 03:23:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:20:30.919 03:23:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:30.919 03:23:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.919 03:23:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:30.919 03:23:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.919 03:23:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:30.919 03:23:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.919 03:23:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:30.919 03:23:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.919 00:20:30.919 real 0m11.078s 00:20:30.919 user 0m9.029s 00:20:30.919 sys 0m1.745s 00:20:30.919 03:23:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:30.919 03:23:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:30.919 ************************************ 00:20:30.919 END TEST fio_dif_1_default 00:20:30.919 ************************************ 00:20:30.919 03:23:12 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:20:30.919 03:23:12 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:30.919 03:23:12 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:30.919 03:23:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:30.919 ************************************ 00:20:30.919 START TEST fio_dif_1_multi_subsystems 00:20:30.919 ************************************ 00:20:30.919 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:20:30.919 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:20:30.919 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:20:30.919 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:20:30.919 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:30.919 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:20:30.919 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:20:30.919 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:30.919 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.919 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:30.919 bdev_null0 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:30.920 [2024-10-09 03:23:12.678047] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:30.920 bdev_null1 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 
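Both subsystems in this test are created purely over JSON-RPC: a DIF-capable null bdev, a subsystem wrapping it, a namespace, and a TCP listener on the in-namespace address. The rpc_cmd calls above map onto SPDK's stock RPC client roughly as follows (a sketch assuming scripts/rpc.py against the default RPC socket; the transport itself was created once earlier by dif.sh with --dif-insert-or-strip):

  scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
  for sub in 0 1; do
      scripts/rpc.py bdev_null_create bdev_null$sub 64 512 --md-size 16 --dif-type 1
      scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$sub \
          --serial-number 53313233-$sub --allow-any-host
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$sub bdev_null$sub
      scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$sub \
          -t tcp -a 10.0.0.3 -s 4420
  done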
00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:30.920 { 00:20:30.920 "params": { 00:20:30.920 "name": "Nvme$subsystem", 00:20:30.920 "trtype": "$TEST_TRANSPORT", 00:20:30.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.920 "adrfam": "ipv4", 00:20:30.920 "trsvcid": "$NVMF_PORT", 00:20:30.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.920 "hdgst": ${hdgst:-false}, 00:20:30.920 "ddgst": ${ddgst:-false} 00:20:30.920 }, 00:20:30.920 "method": "bdev_nvme_attach_controller" 00:20:30.920 } 00:20:30.920 EOF 00:20:30.920 )") 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:30.920 { 00:20:30.920 "params": { 00:20:30.920 "name": "Nvme$subsystem", 00:20:30.920 "trtype": "$TEST_TRANSPORT", 00:20:30.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.920 "adrfam": "ipv4", 00:20:30.920 "trsvcid": "$NVMF_PORT", 00:20:30.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.920 "hdgst": ${hdgst:-false}, 00:20:30.920 "ddgst": ${ddgst:-false} 00:20:30.920 }, 00:20:30.920 "method": "bdev_nvme_attach_controller" 00:20:30.920 } 00:20:30.920 EOF 00:20:30.920 )") 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 
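What follows is the generated bdev configuration being printed: gen_nvmf_target_json emits one bdev_nvme_attach_controller block per subsystem, and fio consumes it through SPDK's fio plugin (note the LD_PRELOAD of build/fio/spdk_bdev at the end of the setup). A rough standalone equivalent is sketched below; the job file contents are an assumption, since the real job file is produced by gen_fio_conf on /dev/fd/61, and the /tmp paths stand in for the /dev/fd/62 and /dev/fd/61 substitutions used by the script:

  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/dif.fio

  # /tmp/dif.fio (assumed shape; one job per attached namespace)
  [filename0]
  rw=randread
  bs=4k
  iodepth=4
  # "Nvme0n1" is namespace 1 of the controller named "Nvme0" in the JSON printed just below
  filename=Nvme0n1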
00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:20:30.920 "params": { 00:20:30.920 "name": "Nvme0", 00:20:30.920 "trtype": "tcp", 00:20:30.920 "traddr": "10.0.0.3", 00:20:30.920 "adrfam": "ipv4", 00:20:30.920 "trsvcid": "4420", 00:20:30.920 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:30.920 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:30.920 "hdgst": false, 00:20:30.920 "ddgst": false 00:20:30.920 }, 00:20:30.920 "method": "bdev_nvme_attach_controller" 00:20:30.920 },{ 00:20:30.920 "params": { 00:20:30.920 "name": "Nvme1", 00:20:30.920 "trtype": "tcp", 00:20:30.920 "traddr": "10.0.0.3", 00:20:30.920 "adrfam": "ipv4", 00:20:30.920 "trsvcid": "4420", 00:20:30.920 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:30.920 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:30.920 "hdgst": false, 00:20:30.920 "ddgst": false 00:20:30.920 }, 00:20:30.920 "method": "bdev_nvme_attach_controller" 00:20:30.920 }' 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:30.920 03:23:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:30.920 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:30.920 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:30.920 fio-3.35 00:20:30.920 Starting 2 threads 00:20:40.933 00:20:40.933 filename0: (groupid=0, jobs=1): err= 0: pid=83296: Wed Oct 9 03:23:23 2024 00:20:40.933 read: IOPS=4212, BW=16.5MiB/s (17.3MB/s)(165MiB/10001msec) 00:20:40.933 slat (usec): min=5, max=103, avg=17.70, stdev=10.21 00:20:40.933 clat (usec): min=181, max=1819, avg=901.07, stdev=75.39 00:20:40.933 lat (usec): min=201, max=1850, avg=918.76, stdev=76.48 00:20:40.933 clat percentiles (usec): 00:20:40.933 | 1.00th=[ 709], 5.00th=[ 775], 10.00th=[ 807], 20.00th=[ 840], 00:20:40.933 | 30.00th=[ 865], 40.00th=[ 889], 50.00th=[ 906], 60.00th=[ 922], 00:20:40.933 | 70.00th=[ 938], 80.00th=[ 963], 90.00th=[ 996], 95.00th=[ 1020], 00:20:40.933 | 99.00th=[ 1074], 99.50th=[ 1090], 99.90th=[ 1123], 99.95th=[ 1139], 00:20:40.933 | 99.99th=[ 1205] 00:20:40.933 bw ( KiB/s): min=15968, max=18048, per=49.85%, avg=16796.63, stdev=661.97, samples=19 00:20:40.933 iops : min= 3992, max= 4512, 
avg=4198.95, stdev=165.19, samples=19 00:20:40.933 lat (usec) : 250=0.01%, 750=3.07%, 1000=88.72% 00:20:40.933 lat (msec) : 2=8.21% 00:20:40.933 cpu : usr=93.05%, sys=5.66%, ctx=6, majf=0, minf=0 00:20:40.933 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:40.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:40.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:40.933 issued rwts: total=42125,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:40.933 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:40.933 filename1: (groupid=0, jobs=1): err= 0: pid=83297: Wed Oct 9 03:23:23 2024 00:20:40.933 read: IOPS=4211, BW=16.5MiB/s (17.3MB/s)(165MiB/10001msec) 00:20:40.933 slat (usec): min=5, max=131, avg=19.82, stdev=12.00 00:20:40.933 clat (usec): min=585, max=1382, avg=893.16, stdev=85.27 00:20:40.933 lat (usec): min=591, max=1407, avg=912.98, stdev=89.11 00:20:40.933 clat percentiles (usec): 00:20:40.933 | 1.00th=[ 693], 5.00th=[ 758], 10.00th=[ 783], 20.00th=[ 824], 00:20:40.933 | 30.00th=[ 848], 40.00th=[ 873], 50.00th=[ 889], 60.00th=[ 914], 00:20:40.933 | 70.00th=[ 938], 80.00th=[ 963], 90.00th=[ 1004], 95.00th=[ 1037], 00:20:40.933 | 99.00th=[ 1090], 99.50th=[ 1123], 99.90th=[ 1156], 99.95th=[ 1188], 00:20:40.933 | 99.99th=[ 1237] 00:20:40.933 bw ( KiB/s): min=15968, max=18016, per=49.85%, avg=16796.63, stdev=658.91, samples=19 00:20:40.933 iops : min= 3992, max= 4504, avg=4199.16, stdev=164.73, samples=19 00:20:40.933 lat (usec) : 750=4.45%, 1000=84.93% 00:20:40.933 lat (msec) : 2=10.62% 00:20:40.933 cpu : usr=92.57%, sys=5.90%, ctx=7, majf=0, minf=0 00:20:40.933 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:40.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:40.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:40.933 issued rwts: total=42124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:40.933 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:40.933 00:20:40.933 Run status group 0 (all jobs): 00:20:40.933 READ: bw=32.9MiB/s (34.5MB/s), 16.5MiB/s-16.5MiB/s (17.3MB/s-17.3MB/s), io=329MiB (345MB), run=10001-10001msec 00:20:40.933 03:23:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:40.933 03:23:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:20:40.933 03:23:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:40.933 03:23:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:40.933 03:23:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:20:40.933 03:23:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:40.933 03:23:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.933 03:23:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:40.933 03:23:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.933 03:23:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:40.933 03:23:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.933 03:23:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set 
+x 00:20:40.933 03:23:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.933 03:23:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:40.933 03:23:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:40.933 03:23:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:20:40.933 03:23:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:40.933 03:23:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.933 03:23:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:40.933 03:23:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.933 03:23:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:40.933 03:23:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.933 03:23:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:40.933 ************************************ 00:20:40.933 END TEST fio_dif_1_multi_subsystems 00:20:40.933 ************************************ 00:20:40.933 03:23:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.933 00:20:40.933 real 0m11.171s 00:20:40.933 user 0m19.402s 00:20:40.933 sys 0m1.463s 00:20:40.933 03:23:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:40.933 03:23:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:40.933 03:23:23 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:40.933 03:23:23 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:40.933 03:23:23 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:40.933 03:23:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:40.933 ************************************ 00:20:40.933 START TEST fio_dif_rand_params 00:20:40.933 ************************************ 00:20:40.933 03:23:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:20:40.933 03:23:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:20:40.933 03:23:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:40.933 03:23:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:20:40.933 03:23:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:20:40.933 03:23:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:20:40.933 03:23:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:20:40.933 03:23:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:20:40.933 03:23:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:20:40.933 03:23:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:40.933 03:23:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:40.933 03:23:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:40.933 03:23:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:40.933 03:23:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 
-- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:40.933 03:23:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.933 03:23:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:40.933 bdev_null0 00:20:40.933 03:23:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.933 03:23:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:40.933 03:23:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.933 03:23:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:40.934 [2024-10-09 03:23:23.897445] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:40.934 { 00:20:40.934 "params": { 00:20:40.934 "name": "Nvme$subsystem", 00:20:40.934 "trtype": "$TEST_TRANSPORT", 00:20:40.934 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.934 "adrfam": "ipv4", 00:20:40.934 "trsvcid": "$NVMF_PORT", 00:20:40.934 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.934 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.934 "hdgst": ${hdgst:-false}, 00:20:40.934 "ddgst": ${ddgst:-false} 00:20:40.934 }, 00:20:40.934 "method": "bdev_nvme_attach_controller" 00:20:40.934 } 00:20:40.934 EOF 00:20:40.934 )") 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:40.934 03:23:23 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:20:40.934 "params": { 00:20:40.934 "name": "Nvme0", 00:20:40.934 "trtype": "tcp", 00:20:40.934 "traddr": "10.0.0.3", 00:20:40.934 "adrfam": "ipv4", 00:20:40.934 "trsvcid": "4420", 00:20:40.934 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:40.934 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:40.934 "hdgst": false, 00:20:40.934 "ddgst": false 00:20:40.934 }, 00:20:40.934 "method": "bdev_nvme_attach_controller" 00:20:40.934 }' 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:40.934 03:23:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:40.934 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:40.934 ... 00:20:40.934 fio-3.35 00:20:40.934 Starting 3 threads 00:20:47.504 00:20:47.504 filename0: (groupid=0, jobs=1): err= 0: pid=83453: Wed Oct 9 03:23:29 2024 00:20:47.504 read: IOPS=269, BW=33.7MiB/s (35.3MB/s)(169MiB/5009msec) 00:20:47.504 slat (nsec): min=5900, max=81081, avg=20693.21, stdev=11691.31 00:20:47.504 clat (usec): min=4836, max=12558, avg=11085.53, stdev=672.65 00:20:47.504 lat (usec): min=4845, max=12588, avg=11106.22, stdev=671.97 00:20:47.504 clat percentiles (usec): 00:20:47.504 | 1.00th=[10159], 5.00th=[10290], 10.00th=[10290], 20.00th=[10552], 00:20:47.504 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:20:47.504 | 70.00th=[11469], 80.00th=[11731], 90.00th=[11994], 95.00th=[12125], 00:20:47.504 | 99.00th=[12256], 99.50th=[12518], 99.90th=[12518], 99.95th=[12518], 00:20:47.504 | 99.99th=[12518] 00:20:47.504 bw ( KiB/s): min=32256, max=36096, per=33.37%, avg=34483.20, stdev=1170.34, samples=10 00:20:47.504 iops : min= 252, max= 282, avg=269.40, stdev= 9.14, samples=10 00:20:47.504 lat (msec) : 10=0.67%, 20=99.33% 00:20:47.504 cpu : usr=94.37%, sys=5.11%, ctx=12, majf=0, minf=0 00:20:47.504 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:47.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.504 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.504 issued rwts: total=1350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.504 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:47.504 filename0: (groupid=0, jobs=1): err= 0: pid=83454: Wed Oct 9 03:23:29 2024 00:20:47.504 read: IOPS=269, BW=33.6MiB/s (35.3MB/s)(168MiB/5004msec) 00:20:47.504 slat (usec): min=6, max=110, avg=25.06, stdev=11.89 00:20:47.504 clat (usec): min=8384, max=13559, avg=11088.82, stdev=638.21 00:20:47.504 lat (usec): min=8392, max=13589, avg=11113.87, stdev=639.12 00:20:47.504 clat percentiles (usec): 00:20:47.504 | 1.00th=[10028], 5.00th=[10290], 10.00th=[10290], 20.00th=[10421], 00:20:47.504 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:20:47.504 | 70.00th=[11469], 80.00th=[11731], 90.00th=[11994], 95.00th=[12125], 00:20:47.504 | 99.00th=[12518], 99.50th=[12649], 99.90th=[13566], 99.95th=[13566], 00:20:47.504 | 99.99th=[13566] 00:20:47.504 bw ( KiB/s): min=32256, max=36096, per=33.11%, avg=34218.67, stdev=1024.00, samples=9 00:20:47.504 iops : min= 252, max= 282, avg=267.33, stdev= 8.00, samples=9 00:20:47.504 lat (msec) : 10=0.97%, 20=99.03% 00:20:47.504 cpu : usr=94.40%, sys=5.04%, ctx=7, majf=0, minf=0 00:20:47.504 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:47.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.504 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.504 issued rwts: total=1347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.504 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:47.504 filename0: (groupid=0, jobs=1): err= 0: pid=83455: Wed Oct 9 03:23:29 2024 00:20:47.504 read: IOPS=269, BW=33.6MiB/s (35.3MB/s)(168MiB/5004msec) 00:20:47.504 slat (usec): min=6, max=109, avg=24.58, stdev=13.07 00:20:47.504 clat (usec): min=9945, max=12638, avg=11089.59, stdev=603.68 00:20:47.504 lat (usec): min=9968, max=12656, 
avg=11114.17, stdev=604.01 00:20:47.504 clat percentiles (usec): 00:20:47.504 | 1.00th=[10159], 5.00th=[10290], 10.00th=[10290], 20.00th=[10552], 00:20:47.504 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:20:47.504 | 70.00th=[11469], 80.00th=[11731], 90.00th=[11994], 95.00th=[12125], 00:20:47.504 | 99.00th=[12387], 99.50th=[12387], 99.90th=[12649], 99.95th=[12649], 00:20:47.504 | 99.99th=[12649] 00:20:47.504 bw ( KiB/s): min=32256, max=36096, per=33.11%, avg=34218.67, stdev=1024.00, samples=9 00:20:47.504 iops : min= 252, max= 282, avg=267.33, stdev= 8.00, samples=9 00:20:47.504 lat (msec) : 10=0.22%, 20=99.78% 00:20:47.504 cpu : usr=95.06%, sys=4.32%, ctx=9, majf=0, minf=0 00:20:47.504 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:47.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.504 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.504 issued rwts: total=1347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.504 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:47.504 00:20:47.504 Run status group 0 (all jobs): 00:20:47.504 READ: bw=101MiB/s (106MB/s), 33.6MiB/s-33.7MiB/s (35.3MB/s-35.3MB/s), io=506MiB (530MB), run=5004-5009msec 00:20:47.504 03:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:47.504 03:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:47.504 03:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:47.504 03:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:47.504 03:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:47.504 03:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:47.504 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.504 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.504 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.504 03:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:47.504 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.504 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.504 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.504 03:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:20:47.504 03:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@18 -- # local sub_id=0 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.505 bdev_null0 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.505 [2024-10-09 03:23:29.939303] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.505 bdev_null1 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.505 03:23:29 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.505 bdev_null2 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.505 03:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:47.505 { 00:20:47.505 "params": { 00:20:47.505 "name": "Nvme$subsystem", 
00:20:47.505 "trtype": "$TEST_TRANSPORT", 00:20:47.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.505 "adrfam": "ipv4", 00:20:47.505 "trsvcid": "$NVMF_PORT", 00:20:47.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.505 "hdgst": ${hdgst:-false}, 00:20:47.505 "ddgst": ${ddgst:-false} 00:20:47.505 }, 00:20:47.505 "method": "bdev_nvme_attach_controller" 00:20:47.505 } 00:20:47.505 EOF 00:20:47.505 )") 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:47.505 { 00:20:47.505 "params": { 00:20:47.505 "name": "Nvme$subsystem", 00:20:47.505 "trtype": "$TEST_TRANSPORT", 00:20:47.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.505 "adrfam": "ipv4", 00:20:47.505 "trsvcid": "$NVMF_PORT", 00:20:47.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.505 "hdgst": ${hdgst:-false}, 00:20:47.505 "ddgst": ${ddgst:-false} 00:20:47.505 }, 00:20:47.505 "method": "bdev_nvme_attach_controller" 00:20:47.505 } 00:20:47.505 EOF 00:20:47.505 )") 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # 
(( file <= files )) 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:47.505 { 00:20:47.505 "params": { 00:20:47.505 "name": "Nvme$subsystem", 00:20:47.505 "trtype": "$TEST_TRANSPORT", 00:20:47.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.505 "adrfam": "ipv4", 00:20:47.505 "trsvcid": "$NVMF_PORT", 00:20:47.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.505 "hdgst": ${hdgst:-false}, 00:20:47.505 "ddgst": ${ddgst:-false} 00:20:47.505 }, 00:20:47.505 "method": "bdev_nvme_attach_controller" 00:20:47.505 } 00:20:47.505 EOF 00:20:47.505 )") 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:20:47.505 03:23:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:20:47.506 03:23:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:20:47.506 "params": { 00:20:47.506 "name": "Nvme0", 00:20:47.506 "trtype": "tcp", 00:20:47.506 "traddr": "10.0.0.3", 00:20:47.506 "adrfam": "ipv4", 00:20:47.506 "trsvcid": "4420", 00:20:47.506 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:47.506 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:47.506 "hdgst": false, 00:20:47.506 "ddgst": false 00:20:47.506 }, 00:20:47.506 "method": "bdev_nvme_attach_controller" 00:20:47.506 },{ 00:20:47.506 "params": { 00:20:47.506 "name": "Nvme1", 00:20:47.506 "trtype": "tcp", 00:20:47.506 "traddr": "10.0.0.3", 00:20:47.506 "adrfam": "ipv4", 00:20:47.506 "trsvcid": "4420", 00:20:47.506 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.506 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:47.506 "hdgst": false, 00:20:47.506 "ddgst": false 00:20:47.506 }, 00:20:47.506 "method": "bdev_nvme_attach_controller" 00:20:47.506 },{ 00:20:47.506 "params": { 00:20:47.506 "name": "Nvme2", 00:20:47.506 "trtype": "tcp", 00:20:47.506 "traddr": "10.0.0.3", 00:20:47.506 "adrfam": "ipv4", 00:20:47.506 "trsvcid": "4420", 00:20:47.506 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:47.506 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:47.506 "hdgst": false, 00:20:47.506 "ddgst": false 00:20:47.506 }, 00:20:47.506 "method": "bdev_nvme_attach_controller" 00:20:47.506 }' 00:20:47.506 03:23:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:47.506 03:23:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:47.506 03:23:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:47.506 03:23:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:47.506 03:23:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:47.506 03:23:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:47.506 03:23:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # 
asan_lib= 00:20:47.506 03:23:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:47.506 03:23:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:47.506 03:23:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:47.506 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:47.506 ... 00:20:47.506 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:47.506 ... 00:20:47.506 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:47.506 ... 00:20:47.506 fio-3.35 00:20:47.506 Starting 24 threads 00:20:59.726 00:20:59.726 filename0: (groupid=0, jobs=1): err= 0: pid=83550: Wed Oct 9 03:23:41 2024 00:20:59.726 read: IOPS=209, BW=837KiB/s (857kB/s)(8376KiB/10008msec) 00:20:59.726 slat (usec): min=4, max=8052, avg=36.52, stdev=361.97 00:20:59.726 clat (msec): min=13, max=155, avg=76.24, stdev=23.08 00:20:59.726 lat (msec): min=13, max=155, avg=76.28, stdev=23.10 00:20:59.726 clat percentiles (msec): 00:20:59.726 | 1.00th=[ 40], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 53], 00:20:59.726 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 79], 00:20:59.726 | 70.00th=[ 86], 80.00th=[ 99], 90.00th=[ 108], 95.00th=[ 112], 00:20:59.726 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 157], 00:20:59.726 | 99.99th=[ 157] 00:20:59.726 bw ( KiB/s): min= 528, max= 1024, per=4.10%, avg=823.84, stdev=145.31, samples=19 00:20:59.726 iops : min= 132, max= 256, avg=205.95, stdev=36.35, samples=19 00:20:59.727 lat (msec) : 20=0.29%, 50=16.62%, 100=63.75%, 250=19.34% 00:20:59.727 cpu : usr=37.96%, sys=1.59%, ctx=1027, majf=0, minf=9 00:20:59.727 IO depths : 1=0.1%, 2=1.8%, 4=7.2%, 8=76.1%, 16=14.8%, 32=0.0%, >=64=0.0% 00:20:59.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.727 complete : 0=0.0%, 4=88.9%, 8=9.6%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.727 issued rwts: total=2094,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.727 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:59.727 filename0: (groupid=0, jobs=1): err= 0: pid=83551: Wed Oct 9 03:23:41 2024 00:20:59.727 read: IOPS=212, BW=851KiB/s (871kB/s)(8544KiB/10040msec) 00:20:59.727 slat (usec): min=5, max=8022, avg=24.63, stdev=229.66 00:20:59.727 clat (msec): min=7, max=132, avg=75.00, stdev=21.79 00:20:59.727 lat (msec): min=7, max=132, avg=75.03, stdev=21.79 00:20:59.727 clat percentiles (msec): 00:20:59.727 | 1.00th=[ 8], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 56], 00:20:59.727 | 30.00th=[ 65], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 79], 00:20:59.727 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 111], 00:20:59.727 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 124], 99.95th=[ 131], 00:20:59.727 | 99.99th=[ 133] 00:20:59.727 bw ( KiB/s): min= 640, max= 1240, per=4.24%, avg=850.05, stdev=123.89, samples=20 00:20:59.727 iops : min= 160, max= 310, avg=212.45, stdev=31.03, samples=20 00:20:59.727 lat (msec) : 10=1.50%, 50=10.91%, 100=71.77%, 250=15.82% 00:20:59.727 cpu : usr=44.01%, sys=1.88%, ctx=1293, majf=0, minf=9 00:20:59.727 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=81.6%, 16=16.4%, 32=0.0%, >=64=0.0% 00:20:59.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.727 complete : 0=0.0%, 4=87.9%, 8=11.8%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.727 issued rwts: total=2136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.727 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:59.727 filename0: (groupid=0, jobs=1): err= 0: pid=83552: Wed Oct 9 03:23:41 2024 00:20:59.727 read: IOPS=219, BW=880KiB/s (901kB/s)(8800KiB/10002msec) 00:20:59.727 slat (usec): min=3, max=8050, avg=38.60, stdev=372.15 00:20:59.727 clat (usec): min=1684, max=150786, avg=72568.29, stdev=23408.30 00:20:59.727 lat (usec): min=1692, max=150796, avg=72606.89, stdev=23403.36 00:20:59.727 clat percentiles (msec): 00:20:59.727 | 1.00th=[ 9], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 52], 00:20:59.727 | 30.00th=[ 59], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 75], 00:20:59.727 | 70.00th=[ 82], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 113], 00:20:59.727 | 99.00th=[ 132], 99.50th=[ 133], 99.90th=[ 146], 99.95th=[ 150], 00:20:59.727 | 99.99th=[ 150] 00:20:59.727 bw ( KiB/s): min= 528, max= 992, per=4.27%, avg=857.26, stdev=123.34, samples=19 00:20:59.727 iops : min= 132, max= 248, avg=214.32, stdev=30.83, samples=19 00:20:59.727 lat (msec) : 2=0.18%, 4=0.73%, 10=0.27%, 20=0.27%, 50=16.95% 00:20:59.727 lat (msec) : 100=66.64%, 250=14.95% 00:20:59.727 cpu : usr=41.97%, sys=1.87%, ctx=1065, majf=0, minf=9 00:20:59.727 IO depths : 1=0.1%, 2=0.5%, 4=2.1%, 8=81.8%, 16=15.5%, 32=0.0%, >=64=0.0% 00:20:59.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.727 complete : 0=0.0%, 4=87.3%, 8=12.2%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.727 issued rwts: total=2200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.727 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:59.727 filename0: (groupid=0, jobs=1): err= 0: pid=83553: Wed Oct 9 03:23:41 2024 00:20:59.727 read: IOPS=211, BW=845KiB/s (865kB/s)(8452KiB/10006msec) 00:20:59.727 slat (usec): min=4, max=9023, avg=31.89, stdev=295.53 00:20:59.727 clat (msec): min=9, max=150, avg=75.61, stdev=23.15 00:20:59.727 lat (msec): min=9, max=150, avg=75.64, stdev=23.15 00:20:59.727 clat percentiles (msec): 00:20:59.727 | 1.00th=[ 31], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 53], 00:20:59.727 | 30.00th=[ 62], 40.00th=[ 70], 50.00th=[ 73], 60.00th=[ 79], 00:20:59.727 | 70.00th=[ 86], 80.00th=[ 100], 90.00th=[ 109], 95.00th=[ 114], 00:20:59.727 | 99.00th=[ 129], 99.50th=[ 142], 99.90th=[ 142], 99.95th=[ 150], 00:20:59.727 | 99.99th=[ 150] 00:20:59.727 bw ( KiB/s): min= 528, max= 1024, per=4.13%, avg=829.05, stdev=139.06, samples=19 00:20:59.727 iops : min= 132, max= 256, avg=207.26, stdev=34.76, samples=19 00:20:59.727 lat (msec) : 10=0.33%, 20=0.28%, 50=14.39%, 100=65.78%, 250=19.21% 00:20:59.727 cpu : usr=43.53%, sys=1.62%, ctx=1431, majf=0, minf=9 00:20:59.727 IO depths : 1=0.1%, 2=1.4%, 4=5.6%, 8=77.9%, 16=15.0%, 32=0.0%, >=64=0.0% 00:20:59.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.727 complete : 0=0.0%, 4=88.4%, 8=10.4%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.727 issued rwts: total=2113,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.727 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:59.727 filename0: (groupid=0, jobs=1): err= 0: pid=83554: Wed Oct 9 03:23:41 2024 00:20:59.727 read: IOPS=210, BW=844KiB/s (864kB/s)(8472KiB/10041msec) 00:20:59.727 slat (usec): min=3, max=9048, avg=29.64, stdev=284.79 00:20:59.727 clat (msec): min=32, max=158, avg=75.72, stdev=21.19 00:20:59.727 lat (msec): min=32, max=158, 
avg=75.75, stdev=21.19 00:20:59.727 clat percentiles (msec): 00:20:59.727 | 1.00th=[ 40], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 55], 00:20:59.727 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 79], 00:20:59.727 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 110], 00:20:59.727 | 99.00th=[ 124], 99.50th=[ 136], 99.90th=[ 144], 99.95th=[ 159], 00:20:59.727 | 99.99th=[ 159] 00:20:59.727 bw ( KiB/s): min= 664, max= 976, per=4.19%, avg=840.70, stdev=95.83, samples=20 00:20:59.727 iops : min= 166, max= 244, avg=210.15, stdev=23.97, samples=20 00:20:59.727 lat (msec) : 50=16.01%, 100=69.12%, 250=14.87% 00:20:59.727 cpu : usr=32.58%, sys=1.22%, ctx=1112, majf=0, minf=9 00:20:59.727 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.9%, 16=16.4%, 32=0.0%, >=64=0.0% 00:20:59.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.727 complete : 0=0.0%, 4=87.4%, 8=12.4%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.727 issued rwts: total=2118,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.727 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:59.727 filename0: (groupid=0, jobs=1): err= 0: pid=83555: Wed Oct 9 03:23:41 2024 00:20:59.727 read: IOPS=198, BW=794KiB/s (813kB/s)(7976KiB/10042msec) 00:20:59.727 slat (usec): min=5, max=8046, avg=34.45, stdev=317.31 00:20:59.727 clat (msec): min=34, max=162, avg=80.29, stdev=22.39 00:20:59.727 lat (msec): min=34, max=162, avg=80.32, stdev=22.39 00:20:59.727 clat percentiles (msec): 00:20:59.727 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 62], 00:20:59.727 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 84], 00:20:59.727 | 70.00th=[ 93], 80.00th=[ 103], 90.00th=[ 110], 95.00th=[ 117], 00:20:59.727 | 99.00th=[ 138], 99.50th=[ 144], 99.90th=[ 163], 99.95th=[ 163], 00:20:59.727 | 99.99th=[ 163] 00:20:59.727 bw ( KiB/s): min= 512, max= 1024, per=3.95%, avg=793.90, stdev=137.68, samples=20 00:20:59.727 iops : min= 128, max= 256, avg=198.45, stdev=34.45, samples=20 00:20:59.727 lat (msec) : 50=10.03%, 100=68.86%, 250=21.11% 00:20:59.727 cpu : usr=34.73%, sys=1.27%, ctx=1104, majf=0, minf=10 00:20:59.727 IO depths : 1=0.1%, 2=1.7%, 4=6.8%, 8=75.7%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:59.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.727 complete : 0=0.0%, 4=89.5%, 8=9.0%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.727 issued rwts: total=1994,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.727 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:59.727 filename0: (groupid=0, jobs=1): err= 0: pid=83556: Wed Oct 9 03:23:41 2024 00:20:59.727 read: IOPS=214, BW=857KiB/s (878kB/s)(8592KiB/10020msec) 00:20:59.727 slat (usec): min=4, max=8045, avg=47.13, stdev=469.16 00:20:59.727 clat (msec): min=27, max=140, avg=74.43, stdev=20.31 00:20:59.727 lat (msec): min=27, max=140, avg=74.48, stdev=20.32 00:20:59.727 clat percentiles (msec): 00:20:59.727 | 1.00th=[ 39], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 54], 00:20:59.727 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 78], 00:20:59.727 | 70.00th=[ 84], 80.00th=[ 95], 90.00th=[ 107], 95.00th=[ 110], 00:20:59.727 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 130], 99.95th=[ 130], 00:20:59.727 | 99.99th=[ 142] 00:20:59.727 bw ( KiB/s): min= 664, max= 1024, per=4.24%, avg=850.68, stdev=86.86, samples=19 00:20:59.727 iops : min= 166, max= 256, avg=212.63, stdev=21.75, samples=19 00:20:59.727 lat (msec) : 50=16.85%, 100=68.99%, 250=14.15% 00:20:59.727 cpu : usr=33.65%, sys=1.22%, ctx=1062, majf=0, minf=9 
00:20:59.727 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.1%, 16=16.2%, 32=0.0%, >=64=0.0% 00:20:59.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.727 complete : 0=0.0%, 4=87.2%, 8=12.7%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.727 issued rwts: total=2148,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.727 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:59.727 filename0: (groupid=0, jobs=1): err= 0: pid=83557: Wed Oct 9 03:23:41 2024 00:20:59.727 read: IOPS=215, BW=862KiB/s (883kB/s)(8640KiB/10023msec) 00:20:59.727 slat (usec): min=4, max=8025, avg=27.29, stdev=211.53 00:20:59.727 clat (msec): min=34, max=136, avg=74.09, stdev=20.12 00:20:59.727 lat (msec): min=34, max=136, avg=74.12, stdev=20.12 00:20:59.727 clat percentiles (msec): 00:20:59.727 | 1.00th=[ 40], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 53], 00:20:59.727 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 77], 00:20:59.727 | 70.00th=[ 83], 80.00th=[ 93], 90.00th=[ 107], 95.00th=[ 110], 00:20:59.727 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 125], 99.95th=[ 136], 00:20:59.727 | 99.99th=[ 136] 00:20:59.727 bw ( KiB/s): min= 688, max= 1072, per=4.28%, avg=858.75, stdev=96.87, samples=20 00:20:59.727 iops : min= 172, max= 268, avg=214.65, stdev=24.21, samples=20 00:20:59.727 lat (msec) : 50=15.37%, 100=70.46%, 250=14.17% 00:20:59.727 cpu : usr=38.22%, sys=1.72%, ctx=1356, majf=0, minf=9 00:20:59.727 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.7%, 16=15.9%, 32=0.0%, >=64=0.0% 00:20:59.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.727 complete : 0=0.0%, 4=87.2%, 8=12.5%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.727 issued rwts: total=2160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.727 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:59.727 filename1: (groupid=0, jobs=1): err= 0: pid=83558: Wed Oct 9 03:23:41 2024 00:20:59.727 read: IOPS=208, BW=833KiB/s (853kB/s)(8360KiB/10039msec) 00:20:59.727 slat (usec): min=5, max=12029, avg=29.33, stdev=327.20 00:20:59.727 clat (msec): min=31, max=138, avg=76.70, stdev=20.34 00:20:59.727 lat (msec): min=31, max=138, avg=76.73, stdev=20.34 00:20:59.727 clat percentiles (msec): 00:20:59.728 | 1.00th=[ 42], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 56], 00:20:59.728 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 81], 00:20:59.728 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 111], 00:20:59.728 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 136], 99.95th=[ 136], 00:20:59.728 | 99.99th=[ 138] 00:20:59.728 bw ( KiB/s): min= 656, max= 1024, per=4.14%, avg=830.85, stdev=99.18, samples=20 00:20:59.728 iops : min= 164, max= 256, avg=207.70, stdev=24.81, samples=20 00:20:59.728 lat (msec) : 50=10.96%, 100=72.63%, 250=16.41% 00:20:59.728 cpu : usr=38.64%, sys=1.68%, ctx=1281, majf=0, minf=9 00:20:59.728 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.4%, 16=16.6%, 32=0.0%, >=64=0.0% 00:20:59.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.728 complete : 0=0.0%, 4=87.7%, 8=12.1%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.728 issued rwts: total=2090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.728 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:59.728 filename1: (groupid=0, jobs=1): err= 0: pid=83559: Wed Oct 9 03:23:41 2024 00:20:59.728 read: IOPS=216, BW=865KiB/s (886kB/s)(8696KiB/10056msec) 00:20:59.728 slat (usec): min=3, max=8042, avg=39.77, stdev=393.16 00:20:59.728 clat (msec): min=2, max=142, avg=73.74, stdev=23.33 
00:20:59.728 lat (msec): min=2, max=142, avg=73.78, stdev=23.33 00:20:59.728 clat percentiles (msec): 00:20:59.728 | 1.00th=[ 4], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 56], 00:20:59.728 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 79], 00:20:59.728 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 111], 00:20:59.728 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 136], 99.95th=[ 140], 00:20:59.728 | 99.99th=[ 142] 00:20:59.728 bw ( KiB/s): min= 712, max= 1576, per=4.30%, avg=863.20, stdev=180.01, samples=20 00:20:59.728 iops : min= 178, max= 394, avg=215.80, stdev=45.00, samples=20 00:20:59.728 lat (msec) : 4=1.38%, 10=0.92%, 20=0.64%, 50=12.56%, 100=69.37% 00:20:59.728 lat (msec) : 250=15.13% 00:20:59.728 cpu : usr=34.50%, sys=1.36%, ctx=925, majf=0, minf=9 00:20:59.728 IO depths : 1=0.2%, 2=0.5%, 4=1.2%, 8=81.7%, 16=16.4%, 32=0.0%, >=64=0.0% 00:20:59.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.728 complete : 0=0.0%, 4=87.8%, 8=12.0%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.728 issued rwts: total=2174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.728 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:59.728 filename1: (groupid=0, jobs=1): err= 0: pid=83560: Wed Oct 9 03:23:41 2024 00:20:59.728 read: IOPS=205, BW=821KiB/s (841kB/s)(8232KiB/10024msec) 00:20:59.728 slat (usec): min=3, max=8029, avg=25.41, stdev=197.77 00:20:59.728 clat (msec): min=28, max=146, avg=77.76, stdev=22.09 00:20:59.728 lat (msec): min=28, max=146, avg=77.78, stdev=22.09 00:20:59.728 clat percentiles (msec): 00:20:59.728 | 1.00th=[ 38], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 58], 00:20:59.728 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 83], 00:20:59.728 | 70.00th=[ 88], 80.00th=[ 100], 90.00th=[ 108], 95.00th=[ 113], 00:20:59.728 | 99.00th=[ 130], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 148], 00:20:59.728 | 99.99th=[ 148] 00:20:59.728 bw ( KiB/s): min= 528, max= 992, per=4.05%, avg=812.21, stdev=135.03, samples=19 00:20:59.728 iops : min= 132, max= 248, avg=203.05, stdev=33.76, samples=19 00:20:59.728 lat (msec) : 50=15.74%, 100=64.77%, 250=19.48% 00:20:59.728 cpu : usr=36.16%, sys=1.51%, ctx=1136, majf=0, minf=9 00:20:59.728 IO depths : 1=0.1%, 2=1.8%, 4=7.3%, 8=75.8%, 16=15.0%, 32=0.0%, >=64=0.0% 00:20:59.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.728 complete : 0=0.0%, 4=89.1%, 8=9.3%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.728 issued rwts: total=2058,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.728 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:59.728 filename1: (groupid=0, jobs=1): err= 0: pid=83561: Wed Oct 9 03:23:41 2024 00:20:59.728 read: IOPS=215, BW=864KiB/s (884kB/s)(8676KiB/10046msec) 00:20:59.728 slat (usec): min=3, max=4028, avg=18.59, stdev=89.32 00:20:59.728 clat (msec): min=8, max=151, avg=73.91, stdev=21.87 00:20:59.728 lat (msec): min=8, max=151, avg=73.93, stdev=21.87 00:20:59.728 clat percentiles (msec): 00:20:59.728 | 1.00th=[ 11], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 54], 00:20:59.728 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 73], 60.00th=[ 78], 00:20:59.728 | 70.00th=[ 83], 80.00th=[ 95], 90.00th=[ 107], 95.00th=[ 111], 00:20:59.728 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 148], 99.95th=[ 148], 00:20:59.728 | 99.99th=[ 153] 00:20:59.728 bw ( KiB/s): min= 675, max= 1130, per=4.30%, avg=863.05, stdev=105.53, samples=20 00:20:59.728 iops : min= 168, max= 282, avg=215.65, stdev=26.43, samples=20 00:20:59.728 lat (msec) : 10=0.65%, 
20=0.74%, 50=14.57%, 100=68.37%, 250=15.68% 00:20:59.728 cpu : usr=44.63%, sys=2.15%, ctx=1187, majf=0, minf=9 00:20:59.728 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=81.8%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:59.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.728 complete : 0=0.0%, 4=87.6%, 8=12.1%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.728 issued rwts: total=2169,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.728 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:59.728 filename1: (groupid=0, jobs=1): err= 0: pid=83562: Wed Oct 9 03:23:41 2024 00:20:59.728 read: IOPS=200, BW=803KiB/s (822kB/s)(8052KiB/10030msec) 00:20:59.728 slat (usec): min=4, max=8049, avg=28.20, stdev=261.57 00:20:59.728 clat (msec): min=35, max=168, avg=79.56, stdev=21.82 00:20:59.728 lat (msec): min=35, max=168, avg=79.58, stdev=21.81 00:20:59.728 clat percentiles (msec): 00:20:59.728 | 1.00th=[ 45], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 61], 00:20:59.728 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 84], 00:20:59.728 | 70.00th=[ 95], 80.00th=[ 105], 90.00th=[ 111], 95.00th=[ 116], 00:20:59.728 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 161], 99.95th=[ 169], 00:20:59.728 | 99.99th=[ 169] 00:20:59.728 bw ( KiB/s): min= 512, max= 1000, per=3.98%, avg=799.60, stdev=146.26, samples=20 00:20:59.728 iops : min= 128, max= 250, avg=199.85, stdev=36.56, samples=20 00:20:59.728 lat (msec) : 50=12.42%, 100=64.08%, 250=23.50% 00:20:59.728 cpu : usr=35.26%, sys=1.49%, ctx=1100, majf=0, minf=9 00:20:59.728 IO depths : 1=0.1%, 2=2.1%, 4=8.3%, 8=74.5%, 16=15.1%, 32=0.0%, >=64=0.0% 00:20:59.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.728 complete : 0=0.0%, 4=89.5%, 8=8.7%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.728 issued rwts: total=2013,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.728 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:59.728 filename1: (groupid=0, jobs=1): err= 0: pid=83563: Wed Oct 9 03:23:41 2024 00:20:59.728 read: IOPS=205, BW=823KiB/s (842kB/s)(8236KiB/10012msec) 00:20:59.728 slat (usec): min=4, max=9034, avg=44.52, stdev=414.12 00:20:59.728 clat (msec): min=16, max=159, avg=77.53, stdev=25.11 00:20:59.728 lat (msec): min=16, max=159, avg=77.57, stdev=25.11 00:20:59.728 clat percentiles (msec): 00:20:59.728 | 1.00th=[ 40], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 54], 00:20:59.728 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 74], 60.00th=[ 79], 00:20:59.728 | 70.00th=[ 85], 80.00th=[ 104], 90.00th=[ 112], 95.00th=[ 129], 00:20:59.728 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 159], 99.95th=[ 161], 00:20:59.728 | 99.99th=[ 161] 00:20:59.728 bw ( KiB/s): min= 512, max= 976, per=4.03%, avg=808.00, stdev=156.61, samples=19 00:20:59.728 iops : min= 128, max= 244, avg=202.00, stdev=39.15, samples=19 00:20:59.728 lat (msec) : 20=0.29%, 50=15.40%, 100=62.02%, 250=22.29% 00:20:59.728 cpu : usr=42.58%, sys=1.98%, ctx=1320, majf=0, minf=9 00:20:59.728 IO depths : 1=0.1%, 2=1.9%, 4=7.7%, 8=75.6%, 16=14.7%, 32=0.0%, >=64=0.0% 00:20:59.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.728 complete : 0=0.0%, 4=89.0%, 8=9.4%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.728 issued rwts: total=2059,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.728 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:59.728 filename1: (groupid=0, jobs=1): err= 0: pid=83564: Wed Oct 9 03:23:41 2024 00:20:59.728 read: IOPS=215, BW=864KiB/s (885kB/s)(8656KiB/10020msec) 00:20:59.728 
slat (usec): min=3, max=8029, avg=25.19, stdev=192.88 00:20:59.728 clat (msec): min=23, max=144, avg=73.94, stdev=21.16 00:20:59.728 lat (msec): min=23, max=144, avg=73.96, stdev=21.16 00:20:59.728 clat percentiles (msec): 00:20:59.728 | 1.00th=[ 38], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 53], 00:20:59.728 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 75], 00:20:59.728 | 70.00th=[ 83], 80.00th=[ 94], 90.00th=[ 108], 95.00th=[ 112], 00:20:59.728 | 99.00th=[ 125], 99.50th=[ 129], 99.90th=[ 140], 99.95th=[ 144], 00:20:59.728 | 99.99th=[ 144] 00:20:59.728 bw ( KiB/s): min= 648, max= 1024, per=4.26%, avg=854.21, stdev=99.06, samples=19 00:20:59.728 iops : min= 162, max= 256, avg=213.53, stdev=24.76, samples=19 00:20:59.728 lat (msec) : 50=17.98%, 100=67.65%, 250=14.37% 00:20:59.728 cpu : usr=34.57%, sys=1.56%, ctx=1036, majf=0, minf=9 00:20:59.728 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=81.8%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:59.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.728 complete : 0=0.0%, 4=87.3%, 8=12.2%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.728 issued rwts: total=2164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.728 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:59.728 filename1: (groupid=0, jobs=1): err= 0: pid=83565: Wed Oct 9 03:23:41 2024 00:20:59.728 read: IOPS=215, BW=863KiB/s (883kB/s)(8644KiB/10021msec) 00:20:59.728 slat (usec): min=5, max=8043, avg=40.13, stdev=375.57 00:20:59.728 clat (msec): min=26, max=144, avg=73.95, stdev=20.59 00:20:59.728 lat (msec): min=26, max=144, avg=73.99, stdev=20.58 00:20:59.728 clat percentiles (msec): 00:20:59.728 | 1.00th=[ 37], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 53], 00:20:59.728 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 78], 00:20:59.728 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 109], 00:20:59.728 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 127], 99.95th=[ 127], 00:20:59.728 | 99.99th=[ 144] 00:20:59.728 bw ( KiB/s): min= 656, max= 1000, per=4.26%, avg=854.47, stdev=104.51, samples=19 00:20:59.728 iops : min= 164, max= 250, avg=213.58, stdev=26.14, samples=19 00:20:59.728 lat (msec) : 50=16.75%, 100=68.53%, 250=14.72% 00:20:59.728 cpu : usr=38.51%, sys=1.59%, ctx=1135, majf=0, minf=9 00:20:59.728 IO depths : 1=0.1%, 2=0.2%, 4=1.0%, 8=82.8%, 16=15.9%, 32=0.0%, >=64=0.0% 00:20:59.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.728 complete : 0=0.0%, 4=87.2%, 8=12.6%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.728 issued rwts: total=2161,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.728 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:59.729 filename2: (groupid=0, jobs=1): err= 0: pid=83566: Wed Oct 9 03:23:41 2024 00:20:59.729 read: IOPS=196, BW=787KiB/s (806kB/s)(7904KiB/10045msec) 00:20:59.729 slat (usec): min=4, max=8044, avg=33.38, stdev=299.19 00:20:59.729 clat (msec): min=33, max=149, avg=81.04, stdev=22.22 00:20:59.729 lat (msec): min=33, max=149, avg=81.08, stdev=22.22 00:20:59.729 clat percentiles (msec): 00:20:59.729 | 1.00th=[ 39], 5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 64], 00:20:59.729 | 30.00th=[ 70], 40.00th=[ 73], 50.00th=[ 79], 60.00th=[ 84], 00:20:59.729 | 70.00th=[ 95], 80.00th=[ 104], 90.00th=[ 110], 95.00th=[ 114], 00:20:59.729 | 99.00th=[ 142], 99.50th=[ 150], 99.90th=[ 150], 99.95th=[ 150], 00:20:59.729 | 99.99th=[ 150] 00:20:59.729 bw ( KiB/s): min= 512, max= 1024, per=3.92%, avg=786.30, stdev=146.76, samples=20 00:20:59.729 iops : min= 128, max= 256, 
avg=196.55, stdev=36.72, samples=20 00:20:59.729 lat (msec) : 50=9.56%, 100=66.95%, 250=23.48% 00:20:59.729 cpu : usr=39.00%, sys=1.36%, ctx=1051, majf=0, minf=9 00:20:59.729 IO depths : 1=0.1%, 2=2.6%, 4=10.5%, 8=72.1%, 16=14.8%, 32=0.0%, >=64=0.0% 00:20:59.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.729 complete : 0=0.0%, 4=90.2%, 8=7.5%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.729 issued rwts: total=1976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.729 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:59.729 filename2: (groupid=0, jobs=1): err= 0: pid=83567: Wed Oct 9 03:23:41 2024 00:20:59.729 read: IOPS=201, BW=807KiB/s (826kB/s)(8092KiB/10029msec) 00:20:59.729 slat (usec): min=4, max=8043, avg=31.49, stdev=296.39 00:20:59.729 clat (msec): min=28, max=155, avg=79.13, stdev=23.44 00:20:59.729 lat (msec): min=29, max=155, avg=79.16, stdev=23.44 00:20:59.729 clat percentiles (msec): 00:20:59.729 | 1.00th=[ 40], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 59], 00:20:59.729 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 82], 00:20:59.729 | 70.00th=[ 92], 80.00th=[ 103], 90.00th=[ 109], 95.00th=[ 118], 00:20:59.729 | 99.00th=[ 140], 99.50th=[ 142], 99.90th=[ 155], 99.95th=[ 157], 00:20:59.729 | 99.99th=[ 157] 00:20:59.729 bw ( KiB/s): min= 512, max= 1024, per=3.97%, avg=796.21, stdev=146.36, samples=19 00:20:59.729 iops : min= 128, max= 256, avg=199.05, stdev=36.59, samples=19 00:20:59.729 lat (msec) : 50=13.89%, 100=64.21%, 250=21.90% 00:20:59.729 cpu : usr=33.22%, sys=1.21%, ctx=981, majf=0, minf=9 00:20:59.729 IO depths : 1=0.1%, 2=2.2%, 4=8.7%, 8=74.3%, 16=14.8%, 32=0.0%, >=64=0.0% 00:20:59.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.729 complete : 0=0.0%, 4=89.4%, 8=8.7%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.729 issued rwts: total=2023,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.729 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:59.729 filename2: (groupid=0, jobs=1): err= 0: pid=83568: Wed Oct 9 03:23:41 2024 00:20:59.729 read: IOPS=213, BW=853KiB/s (874kB/s)(8584KiB/10058msec) 00:20:59.729 slat (usec): min=3, max=10025, avg=30.20, stdev=329.14 00:20:59.729 clat (msec): min=6, max=146, avg=74.71, stdev=22.00 00:20:59.729 lat (msec): min=6, max=146, avg=74.74, stdev=21.99 00:20:59.729 clat percentiles (msec): 00:20:59.729 | 1.00th=[ 17], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 57], 00:20:59.729 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 79], 00:20:59.729 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 109], 00:20:59.729 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 129], 99.95th=[ 134], 00:20:59.729 | 99.99th=[ 146] 00:20:59.729 bw ( KiB/s): min= 658, max= 1368, per=4.25%, avg=853.75, stdev=142.39, samples=20 00:20:59.729 iops : min= 164, max= 342, avg=213.35, stdev=35.68, samples=20 00:20:59.729 lat (msec) : 10=0.65%, 20=1.12%, 50=13.19%, 100=69.38%, 250=15.66% 00:20:59.729 cpu : usr=36.24%, sys=1.41%, ctx=1143, majf=0, minf=9 00:20:59.729 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.5%, 16=16.6%, 32=0.0%, >=64=0.0% 00:20:59.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.729 complete : 0=0.0%, 4=87.7%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.729 issued rwts: total=2146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.729 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:59.729 filename2: (groupid=0, jobs=1): err= 0: pid=83569: Wed Oct 9 03:23:41 2024 00:20:59.729 read: 
IOPS=199, BW=799KiB/s (818kB/s)(8024KiB/10045msec) 00:20:59.729 slat (usec): min=4, max=8034, avg=33.16, stdev=268.75 00:20:59.729 clat (msec): min=32, max=157, avg=79.82, stdev=22.79 00:20:59.729 lat (msec): min=32, max=157, avg=79.85, stdev=22.80 00:20:59.729 clat percentiles (msec): 00:20:59.729 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 58], 00:20:59.729 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 82], 00:20:59.729 | 70.00th=[ 90], 80.00th=[ 104], 90.00th=[ 111], 95.00th=[ 120], 00:20:59.729 | 99.00th=[ 138], 99.50th=[ 148], 99.90th=[ 155], 99.95th=[ 159], 00:20:59.729 | 99.99th=[ 159] 00:20:59.729 bw ( KiB/s): min= 512, max= 944, per=3.98%, avg=798.30, stdev=135.56, samples=20 00:20:59.729 iops : min= 128, max= 236, avg=199.55, stdev=33.94, samples=20 00:20:59.729 lat (msec) : 50=11.27%, 100=66.00%, 250=22.73% 00:20:59.729 cpu : usr=40.22%, sys=1.68%, ctx=1254, majf=0, minf=9 00:20:59.729 IO depths : 1=0.1%, 2=1.8%, 4=7.4%, 8=75.5%, 16=15.2%, 32=0.0%, >=64=0.0% 00:20:59.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.729 complete : 0=0.0%, 4=89.3%, 8=9.1%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.729 issued rwts: total=2006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.729 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:59.729 filename2: (groupid=0, jobs=1): err= 0: pid=83570: Wed Oct 9 03:23:41 2024 00:20:59.729 read: IOPS=209, BW=838KiB/s (858kB/s)(8412KiB/10043msec) 00:20:59.729 slat (usec): min=5, max=8049, avg=23.61, stdev=175.36 00:20:59.729 clat (msec): min=35, max=143, avg=76.18, stdev=20.56 00:20:59.729 lat (msec): min=35, max=143, avg=76.21, stdev=20.55 00:20:59.729 clat percentiles (msec): 00:20:59.729 | 1.00th=[ 43], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 58], 00:20:59.729 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 81], 00:20:59.729 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 114], 00:20:59.729 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 132], 99.95th=[ 142], 00:20:59.729 | 99.99th=[ 144] 00:20:59.729 bw ( KiB/s): min= 656, max= 944, per=4.17%, avg=837.10, stdev=86.36, samples=20 00:20:59.729 iops : min= 164, max= 236, avg=209.25, stdev=21.63, samples=20 00:20:59.729 lat (msec) : 50=14.69%, 100=70.14%, 250=15.17% 00:20:59.729 cpu : usr=33.30%, sys=1.30%, ctx=923, majf=0, minf=9 00:20:59.729 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=82.0%, 16=16.3%, 32=0.0%, >=64=0.0% 00:20:59.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.729 complete : 0=0.0%, 4=87.6%, 8=12.1%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.729 issued rwts: total=2103,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.729 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:59.729 filename2: (groupid=0, jobs=1): err= 0: pid=83571: Wed Oct 9 03:23:41 2024 00:20:59.729 read: IOPS=214, BW=859KiB/s (879kB/s)(8624KiB/10044msec) 00:20:59.729 slat (usec): min=7, max=8064, avg=34.02, stdev=300.48 00:20:59.729 clat (msec): min=29, max=159, avg=74.30, stdev=21.20 00:20:59.729 lat (msec): min=29, max=159, avg=74.34, stdev=21.21 00:20:59.729 clat percentiles (msec): 00:20:59.729 | 1.00th=[ 39], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 53], 00:20:59.729 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 78], 00:20:59.729 | 70.00th=[ 83], 80.00th=[ 94], 90.00th=[ 107], 95.00th=[ 110], 00:20:59.729 | 99.00th=[ 125], 99.50th=[ 136], 99.90th=[ 142], 99.95th=[ 161], 00:20:59.729 | 99.99th=[ 161] 00:20:59.729 bw ( KiB/s): min= 688, max= 1000, per=4.28%, avg=858.20, 
stdev=93.49, samples=20 00:20:59.729 iops : min= 172, max= 250, avg=214.55, stdev=23.37, samples=20 00:20:59.729 lat (msec) : 50=16.79%, 100=67.67%, 250=15.54% 00:20:59.729 cpu : usr=39.77%, sys=1.73%, ctx=1200, majf=0, minf=9 00:20:59.729 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.3%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:59.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.729 complete : 0=0.0%, 4=87.1%, 8=12.8%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.729 issued rwts: total=2156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.729 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:59.729 filename2: (groupid=0, jobs=1): err= 0: pid=83572: Wed Oct 9 03:23:41 2024 00:20:59.729 read: IOPS=210, BW=840KiB/s (860kB/s)(8440KiB/10047msec) 00:20:59.729 slat (nsec): min=3720, max=63981, avg=15378.82, stdev=8088.57 00:20:59.729 clat (msec): min=2, max=143, avg=76.01, stdev=22.56 00:20:59.729 lat (msec): min=2, max=144, avg=76.02, stdev=22.56 00:20:59.729 clat percentiles (msec): 00:20:59.729 | 1.00th=[ 8], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 60], 00:20:59.729 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 82], 00:20:59.729 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 111], 00:20:59.729 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 140], 99.95th=[ 140], 00:20:59.729 | 99.99th=[ 144] 00:20:59.729 bw ( KiB/s): min= 640, max= 1298, per=4.17%, avg=837.70, stdev=133.17, samples=20 00:20:59.729 iops : min= 160, max= 324, avg=209.40, stdev=33.20, samples=20 00:20:59.729 lat (msec) : 4=0.09%, 10=2.13%, 20=0.05%, 50=12.80%, 100=69.19% 00:20:59.729 lat (msec) : 250=15.73% 00:20:59.729 cpu : usr=32.22%, sys=1.44%, ctx=1058, majf=0, minf=9 00:20:59.729 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=80.8%, 16=16.6%, 32=0.0%, >=64=0.0% 00:20:59.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.729 complete : 0=0.0%, 4=88.2%, 8=11.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.729 issued rwts: total=2110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.729 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:59.729 filename2: (groupid=0, jobs=1): err= 0: pid=83573: Wed Oct 9 03:23:41 2024 00:20:59.729 read: IOPS=208, BW=832KiB/s (852kB/s)(8360KiB/10043msec) 00:20:59.729 slat (usec): min=6, max=8029, avg=30.91, stdev=315.80 00:20:59.729 clat (msec): min=35, max=146, avg=76.66, stdev=21.07 00:20:59.729 lat (msec): min=35, max=146, avg=76.70, stdev=21.08 00:20:59.729 clat percentiles (msec): 00:20:59.729 | 1.00th=[ 38], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 59], 00:20:59.729 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 80], 00:20:59.729 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 112], 00:20:59.729 | 99.00th=[ 124], 99.50th=[ 132], 99.90th=[ 140], 99.95th=[ 140], 00:20:59.729 | 99.99th=[ 146] 00:20:59.729 bw ( KiB/s): min= 656, max= 1024, per=4.14%, avg=831.90, stdev=101.92, samples=20 00:20:59.729 iops : min= 164, max= 256, avg=207.95, stdev=25.51, samples=20 00:20:59.729 lat (msec) : 50=14.55%, 100=68.23%, 250=17.22% 00:20:59.729 cpu : usr=35.57%, sys=1.37%, ctx=1009, majf=0, minf=9 00:20:59.729 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=80.7%, 16=16.4%, 32=0.0%, >=64=0.0% 00:20:59.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.730 complete : 0=0.0%, 4=88.1%, 8=11.4%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.730 issued rwts: total=2090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.730 latency : target=0, window=0, percentile=100.00%, depth=16 
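For reference, the 24 per-thread reports above come from a job file that target/dif.sh assembles on the fly (gen_fio_conf) and streams to fio over /dev/fd/61, while the bdev JSON configuration is streamed over /dev/fd/62. A minimal hand-written approximation of that randread job is sketched below; the bdev names (Nvme0n1..Nvme2n1), the runtime, the saved config path /tmp/bdev.json and the omission of numjobs are assumptions added for illustration, not values taken from the harness.

# Sketch only: approximate the 4 KiB randread, iodepth=16 workload that the
# harness generates, using a config file saved to a hypothetical path instead
# of the /dev/fd/62 stream the test actually uses.
cat > /tmp/dif_randread.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=4096
iodepth=16
time_based=1
runtime=10

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1

[filename2]
filename=Nvme2n1
EOF
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --spdk_json_conf /tmp/dif_randread_bdev.json /tmp/dif_randread.fio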
00:20:59.730 00:20:59.730 Run status group 0 (all jobs): 00:20:59.730 READ: bw=19.6MiB/s (20.5MB/s), 787KiB/s-880KiB/s (806kB/s-901kB/s), io=197MiB (207MB), run=10002-10058msec 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 
00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:59.730 bdev_null0 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:59.730 [2024-10-09 03:23:41.342381] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:59.730 bdev_null1 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:59.730 { 00:20:59.730 "params": { 00:20:59.730 "name": "Nvme$subsystem", 00:20:59.730 "trtype": "$TEST_TRANSPORT", 00:20:59.730 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.730 "adrfam": "ipv4", 00:20:59.730 "trsvcid": "$NVMF_PORT", 00:20:59.730 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.730 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.730 "hdgst": ${hdgst:-false}, 00:20:59.730 "ddgst": ${ddgst:-false} 00:20:59.730 }, 00:20:59.730 "method": "bdev_nvme_attach_controller" 00:20:59.730 } 00:20:59.730 EOF 00:20:59.730 )") 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 
00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:59.730 03:23:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:59.731 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:59.731 03:23:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:59.731 { 00:20:59.731 "params": { 00:20:59.731 "name": "Nvme$subsystem", 00:20:59.731 "trtype": "$TEST_TRANSPORT", 00:20:59.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.731 "adrfam": "ipv4", 00:20:59.731 "trsvcid": "$NVMF_PORT", 00:20:59.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.731 "hdgst": ${hdgst:-false}, 00:20:59.731 "ddgst": ${ddgst:-false} 00:20:59.731 }, 00:20:59.731 "method": "bdev_nvme_attach_controller" 00:20:59.731 } 00:20:59.731 EOF 00:20:59.731 )") 00:20:59.731 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:59.731 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:59.731 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:59.731 03:23:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:20:59.731 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:59.731 03:23:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:59.731 03:23:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
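Each fragment assembled above becomes one "bdev_nvme_attach_controller" entry in the JSON that the fio bdev plugin loads, so the controllers are attached inside the fio process itself rather than through a running SPDK application. Purely as a point of reference, the Nvme0/cnode0 entry (printed just below) corresponds to an initiator-side attach that could also be issued against a live app with scripts/rpc.py; the command is an illustration using the usual rpc.py flag spelling, not something this test executes.

# Illustration only: rpc.py equivalent of the Nvme0/cnode0 entry in the
# generated JSON config (same transport, address, port and NQNs).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
    -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0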
00:20:59.731 03:23:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:20:59.731 03:23:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:20:59.731 "params": { 00:20:59.731 "name": "Nvme0", 00:20:59.731 "trtype": "tcp", 00:20:59.731 "traddr": "10.0.0.3", 00:20:59.731 "adrfam": "ipv4", 00:20:59.731 "trsvcid": "4420", 00:20:59.731 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:59.731 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:59.731 "hdgst": false, 00:20:59.731 "ddgst": false 00:20:59.731 }, 00:20:59.731 "method": "bdev_nvme_attach_controller" 00:20:59.731 },{ 00:20:59.731 "params": { 00:20:59.731 "name": "Nvme1", 00:20:59.731 "trtype": "tcp", 00:20:59.731 "traddr": "10.0.0.3", 00:20:59.731 "adrfam": "ipv4", 00:20:59.731 "trsvcid": "4420", 00:20:59.731 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.731 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:59.731 "hdgst": false, 00:20:59.731 "ddgst": false 00:20:59.731 }, 00:20:59.731 "method": "bdev_nvme_attach_controller" 00:20:59.731 }' 00:20:59.731 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:59.731 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:59.731 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:59.731 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:59.731 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:59.731 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:59.731 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:59.731 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:59.731 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:59.731 03:23:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:59.731 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:59.731 ... 00:20:59.731 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:59.731 ... 
00:20:59.731 fio-3.35 00:20:59.731 Starting 4 threads 00:21:05.002 00:21:05.002 filename0: (groupid=0, jobs=1): err= 0: pid=83714: Wed Oct 9 03:23:47 2024 00:21:05.002 read: IOPS=2060, BW=16.1MiB/s (16.9MB/s)(80.5MiB/5001msec) 00:21:05.002 slat (usec): min=3, max=545, avg=17.92, stdev=10.63 00:21:05.002 clat (usec): min=990, max=7649, avg=3829.84, stdev=960.67 00:21:05.002 lat (usec): min=1012, max=7672, avg=3847.76, stdev=961.47 00:21:05.002 clat percentiles (usec): 00:21:05.002 | 1.00th=[ 1745], 5.00th=[ 2114], 10.00th=[ 2409], 20.00th=[ 2769], 00:21:05.002 | 30.00th=[ 3294], 40.00th=[ 3752], 50.00th=[ 4113], 60.00th=[ 4293], 00:21:05.002 | 70.00th=[ 4490], 80.00th=[ 4686], 90.00th=[ 4948], 95.00th=[ 5080], 00:21:05.002 | 99.00th=[ 5342], 99.50th=[ 5473], 99.90th=[ 5932], 99.95th=[ 5997], 00:21:05.002 | 99.99th=[ 7504] 00:21:05.002 bw ( KiB/s): min=13936, max=18880, per=24.61%, avg=16405.00, stdev=1596.22, samples=9 00:21:05.002 iops : min= 1742, max= 2360, avg=2050.56, stdev=199.61, samples=9 00:21:05.002 lat (usec) : 1000=0.01% 00:21:05.002 lat (msec) : 2=2.91%, 4=43.05%, 10=54.03% 00:21:05.002 cpu : usr=92.88%, sys=6.16%, ctx=9, majf=0, minf=0 00:21:05.002 IO depths : 1=0.1%, 2=10.1%, 4=58.2%, 8=31.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:05.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.002 complete : 0=0.0%, 4=96.2%, 8=3.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.002 issued rwts: total=10307,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.002 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:05.002 filename0: (groupid=0, jobs=1): err= 0: pid=83715: Wed Oct 9 03:23:47 2024 00:21:05.002 read: IOPS=2269, BW=17.7MiB/s (18.6MB/s)(88.7MiB/5004msec) 00:21:05.002 slat (usec): min=6, max=112, avg=13.94, stdev= 7.40 00:21:05.002 clat (usec): min=847, max=6741, avg=3487.82, stdev=1022.03 00:21:05.002 lat (usec): min=854, max=6750, avg=3501.75, stdev=1021.86 00:21:05.002 clat percentiles (usec): 00:21:05.002 | 1.00th=[ 1500], 5.00th=[ 2008], 10.00th=[ 2114], 20.00th=[ 2409], 00:21:05.002 | 30.00th=[ 2704], 40.00th=[ 3064], 50.00th=[ 3621], 60.00th=[ 3916], 00:21:05.002 | 70.00th=[ 4228], 80.00th=[ 4490], 90.00th=[ 4817], 95.00th=[ 5080], 00:21:05.002 | 99.00th=[ 5276], 99.50th=[ 5342], 99.90th=[ 5473], 99.95th=[ 5604], 00:21:05.002 | 99.99th=[ 6587] 00:21:05.002 bw ( KiB/s): min=16496, max=20608, per=27.42%, avg=18277.33, stdev=1339.73, samples=9 00:21:05.002 iops : min= 2062, max= 2576, avg=2284.67, stdev=167.47, samples=9 00:21:05.002 lat (usec) : 1000=0.11% 00:21:05.002 lat (msec) : 2=4.60%, 4=57.48%, 10=37.80% 00:21:05.002 cpu : usr=92.96%, sys=6.06%, ctx=10, majf=0, minf=0 00:21:05.002 IO depths : 1=0.1%, 2=3.7%, 4=61.8%, 8=34.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:05.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.002 complete : 0=0.0%, 4=98.6%, 8=1.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.002 issued rwts: total=11358,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.002 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:05.002 filename1: (groupid=0, jobs=1): err= 0: pid=83716: Wed Oct 9 03:23:47 2024 00:21:05.002 read: IOPS=1794, BW=14.0MiB/s (14.7MB/s)(70.1MiB/5001msec) 00:21:05.002 slat (nsec): min=3430, max=93515, avg=17709.51, stdev=9806.41 00:21:05.002 clat (usec): min=878, max=7106, avg=4388.87, stdev=782.87 00:21:05.002 lat (usec): min=887, max=7167, avg=4406.58, stdev=782.72 00:21:05.002 clat percentiles (usec): 00:21:05.002 | 1.00th=[ 1696], 5.00th=[ 2769], 10.00th=[ 3523], 
20.00th=[ 3884], 00:21:05.002 | 30.00th=[ 4146], 40.00th=[ 4359], 50.00th=[ 4555], 60.00th=[ 4621], 00:21:05.002 | 70.00th=[ 4817], 80.00th=[ 5014], 90.00th=[ 5211], 95.00th=[ 5342], 00:21:05.002 | 99.00th=[ 5800], 99.50th=[ 6063], 99.90th=[ 6587], 99.95th=[ 6652], 00:21:05.002 | 99.99th=[ 7111] 00:21:05.002 bw ( KiB/s): min=12432, max=18016, per=21.85%, avg=14566.78, stdev=1827.31, samples=9 00:21:05.002 iops : min= 1554, max= 2252, avg=1820.78, stdev=228.41, samples=9 00:21:05.002 lat (usec) : 1000=0.04% 00:21:05.002 lat (msec) : 2=1.96%, 4=21.45%, 10=76.55% 00:21:05.002 cpu : usr=93.32%, sys=5.64%, ctx=1038, majf=0, minf=0 00:21:05.002 IO depths : 1=0.2%, 2=21.4%, 4=52.0%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:05.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.002 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.002 issued rwts: total=8976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.002 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:05.002 filename1: (groupid=0, jobs=1): err= 0: pid=83717: Wed Oct 9 03:23:47 2024 00:21:05.002 read: IOPS=2210, BW=17.3MiB/s (18.1MB/s)(86.4MiB/5002msec) 00:21:05.002 slat (nsec): min=3808, max=89625, avg=16549.59, stdev=8696.23 00:21:05.002 clat (usec): min=942, max=7454, avg=3572.61, stdev=1034.07 00:21:05.002 lat (usec): min=950, max=7481, avg=3589.16, stdev=1034.05 00:21:05.002 clat percentiles (usec): 00:21:05.002 | 1.00th=[ 1827], 5.00th=[ 2040], 10.00th=[ 2147], 20.00th=[ 2442], 00:21:05.003 | 30.00th=[ 2769], 40.00th=[ 3261], 50.00th=[ 3687], 60.00th=[ 4047], 00:21:05.003 | 70.00th=[ 4293], 80.00th=[ 4555], 90.00th=[ 4948], 95.00th=[ 5145], 00:21:05.003 | 99.00th=[ 5407], 99.50th=[ 5473], 99.90th=[ 6259], 99.95th=[ 6587], 00:21:05.003 | 99.99th=[ 6652] 00:21:05.003 bw ( KiB/s): min=13824, max=20352, per=26.62%, avg=17747.56, stdev=2093.23, samples=9 00:21:05.003 iops : min= 1728, max= 2544, avg=2218.44, stdev=261.65, samples=9 00:21:05.003 lat (usec) : 1000=0.02% 00:21:05.003 lat (msec) : 2=3.61%, 4=54.97%, 10=41.40% 00:21:05.003 cpu : usr=91.84%, sys=6.94%, ctx=38, majf=0, minf=0 00:21:05.003 IO depths : 1=0.1%, 2=5.3%, 4=60.9%, 8=33.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:05.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.003 complete : 0=0.0%, 4=98.0%, 8=2.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.003 issued rwts: total=11057,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.003 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:05.003 00:21:05.003 Run status group 0 (all jobs): 00:21:05.003 READ: bw=65.1MiB/s (68.3MB/s), 14.0MiB/s-17.7MiB/s (14.7MB/s-18.6MB/s), io=326MiB (342MB), run=5001-5004msec 00:21:05.003 03:23:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:21:05.003 03:23:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:05.003 03:23:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:05.003 03:23:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:05.003 03:23:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:05.003 03:23:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:05.003 03:23:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.003 03:23:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.003 03:23:47 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.003 03:23:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:05.003 03:23:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.003 03:23:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.003 03:23:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.003 03:23:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:05.003 03:23:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:05.003 03:23:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:21:05.003 03:23:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:05.003 03:23:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.003 03:23:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.003 03:23:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.003 03:23:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:05.003 03:23:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.003 03:23:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.003 ************************************ 00:21:05.003 END TEST fio_dif_rand_params 00:21:05.003 ************************************ 00:21:05.003 03:23:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.003 00:21:05.003 real 0m23.610s 00:21:05.003 user 2m5.652s 00:21:05.003 sys 0m6.629s 00:21:05.003 03:23:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:05.003 03:23:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.003 03:23:47 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:21:05.003 03:23:47 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:05.003 03:23:47 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:05.003 03:23:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:05.003 ************************************ 00:21:05.003 START TEST fio_dif_digest 00:21:05.003 ************************************ 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:21:05.003 03:23:47 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:05.003 bdev_null0 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:05.003 [2024-10-09 03:23:47.566249] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:05.003 { 00:21:05.003 "params": { 00:21:05.003 "name": "Nvme$subsystem", 00:21:05.003 "trtype": "$TEST_TRANSPORT", 00:21:05.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.003 "adrfam": "ipv4", 00:21:05.003 "trsvcid": 
"$NVMF_PORT", 00:21:05.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.003 "hdgst": ${hdgst:-false}, 00:21:05.003 "ddgst": ${ddgst:-false} 00:21:05.003 }, 00:21:05.003 "method": "bdev_nvme_attach_controller" 00:21:05.003 } 00:21:05.003 EOF 00:21:05.003 )") 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:21:05.003 "params": { 00:21:05.003 "name": "Nvme0", 00:21:05.003 "trtype": "tcp", 00:21:05.003 "traddr": "10.0.0.3", 00:21:05.003 "adrfam": "ipv4", 00:21:05.003 "trsvcid": "4420", 00:21:05.003 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:05.003 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:05.003 "hdgst": true, 00:21:05.003 "ddgst": true 00:21:05.003 }, 00:21:05.003 "method": "bdev_nvme_attach_controller" 00:21:05.003 }' 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:05.003 03:23:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:05.004 03:23:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:05.004 03:23:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:05.004 03:23:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:05.004 03:23:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:05.004 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:05.004 ... 
00:21:05.004 fio-3.35 00:21:05.004 Starting 3 threads 00:21:17.222 00:21:17.222 filename0: (groupid=0, jobs=1): err= 0: pid=83824: Wed Oct 9 03:23:58 2024 00:21:17.222 read: IOPS=228, BW=28.6MiB/s (30.0MB/s)(286MiB/10007msec) 00:21:17.222 slat (nsec): min=6682, max=51564, avg=13764.74, stdev=6343.92 00:21:17.222 clat (usec): min=8489, max=15451, avg=13083.07, stdev=851.79 00:21:17.222 lat (usec): min=8508, max=15466, avg=13096.83, stdev=852.95 00:21:17.222 clat percentiles (usec): 00:21:17.222 | 1.00th=[11600], 5.00th=[11731], 10.00th=[11863], 20.00th=[12125], 00:21:17.222 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13304], 60.00th=[13435], 00:21:17.222 | 70.00th=[13698], 80.00th=[13829], 90.00th=[13960], 95.00th=[14353], 00:21:17.222 | 99.00th=[14877], 99.50th=[15008], 99.90th=[15401], 99.95th=[15401], 00:21:17.222 | 99.99th=[15401] 00:21:17.222 bw ( KiB/s): min=26880, max=31488, per=33.29%, avg=29224.42, stdev=1389.82, samples=19 00:21:17.222 iops : min= 210, max= 246, avg=228.32, stdev=10.86, samples=19 00:21:17.222 lat (msec) : 10=0.13%, 20=99.87% 00:21:17.222 cpu : usr=92.98%, sys=6.43%, ctx=8, majf=0, minf=0 00:21:17.222 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:17.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.222 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.222 issued rwts: total=2289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:17.222 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:17.222 filename0: (groupid=0, jobs=1): err= 0: pid=83825: Wed Oct 9 03:23:58 2024 00:21:17.222 read: IOPS=228, BW=28.6MiB/s (30.0MB/s)(286MiB/10003msec) 00:21:17.222 slat (nsec): min=6313, max=85466, avg=12124.19, stdev=7006.54 00:21:17.223 clat (usec): min=11396, max=17079, avg=13096.37, stdev=852.31 00:21:17.223 lat (usec): min=11403, max=17113, avg=13108.49, stdev=852.65 00:21:17.223 clat percentiles (usec): 00:21:17.223 | 1.00th=[11600], 5.00th=[11731], 10.00th=[11863], 20.00th=[12125], 00:21:17.223 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13304], 60.00th=[13435], 00:21:17.223 | 70.00th=[13698], 80.00th=[13829], 90.00th=[13960], 95.00th=[14353], 00:21:17.223 | 99.00th=[14877], 99.50th=[15139], 99.90th=[17171], 99.95th=[17171], 00:21:17.223 | 99.99th=[17171] 00:21:17.223 bw ( KiB/s): min=26112, max=31488, per=33.28%, avg=29221.16, stdev=1563.90, samples=19 00:21:17.223 iops : min= 204, max= 246, avg=228.26, stdev=12.19, samples=19 00:21:17.223 lat (msec) : 20=100.00% 00:21:17.223 cpu : usr=91.20%, sys=8.03%, ctx=15, majf=0, minf=9 00:21:17.223 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:17.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.223 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.223 issued rwts: total=2286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:17.223 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:17.223 filename0: (groupid=0, jobs=1): err= 0: pid=83826: Wed Oct 9 03:23:58 2024 00:21:17.223 read: IOPS=228, BW=28.6MiB/s (30.0MB/s)(286MiB/10007msec) 00:21:17.223 slat (nsec): min=6375, max=66789, avg=14718.14, stdev=7548.29 00:21:17.223 clat (usec): min=8429, max=15454, avg=13080.31, stdev=863.25 00:21:17.223 lat (usec): min=8436, max=15467, avg=13095.03, stdev=864.35 00:21:17.223 clat percentiles (usec): 00:21:17.223 | 1.00th=[11600], 5.00th=[11731], 10.00th=[11863], 20.00th=[12125], 00:21:17.223 | 30.00th=[12518], 40.00th=[12911], 
50.00th=[13304], 60.00th=[13435], 00:21:17.223 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14091], 95.00th=[14353], 00:21:17.223 | 99.00th=[14877], 99.50th=[15008], 99.90th=[15401], 99.95th=[15401], 00:21:17.223 | 99.99th=[15401] 00:21:17.223 bw ( KiB/s): min=26880, max=31488, per=33.29%, avg=29224.42, stdev=1413.20, samples=19 00:21:17.223 iops : min= 210, max= 246, avg=228.32, stdev=11.04, samples=19 00:21:17.223 lat (msec) : 10=0.26%, 20=99.74% 00:21:17.223 cpu : usr=92.74%, sys=6.65%, ctx=10, majf=0, minf=0 00:21:17.223 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:17.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.223 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.223 issued rwts: total=2289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:17.223 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:17.223 00:21:17.223 Run status group 0 (all jobs): 00:21:17.223 READ: bw=85.7MiB/s (89.9MB/s), 28.6MiB/s-28.6MiB/s (30.0MB/s-30.0MB/s), io=858MiB (900MB), run=10003-10007msec 00:21:17.223 03:23:58 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:21:17.223 03:23:58 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:21:17.223 03:23:58 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:21:17.223 03:23:58 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:17.223 03:23:58 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:21:17.223 03:23:58 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:17.223 03:23:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.223 03:23:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:17.223 03:23:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.223 03:23:58 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:17.223 03:23:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.223 03:23:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:17.223 ************************************ 00:21:17.223 END TEST fio_dif_digest 00:21:17.223 ************************************ 00:21:17.223 03:23:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.223 00:21:17.223 real 0m11.142s 00:21:17.223 user 0m28.460s 00:21:17.223 sys 0m2.423s 00:21:17.223 03:23:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:17.223 03:23:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:17.223 03:23:58 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:21:17.223 03:23:58 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:21:17.223 03:23:58 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:17.223 03:23:58 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:21:17.223 03:23:58 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:17.223 03:23:58 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:21:17.223 03:23:58 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:17.223 03:23:58 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:17.223 rmmod nvme_tcp 00:21:17.223 rmmod nvme_fabrics 00:21:17.223 rmmod nvme_keyring 00:21:17.223 03:23:58 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:17.223 03:23:58 
nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:21:17.223 03:23:58 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:21:17.223 03:23:58 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 83074 ']' 00:21:17.223 03:23:58 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 83074 00:21:17.223 03:23:58 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 83074 ']' 00:21:17.223 03:23:58 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 83074 00:21:17.223 03:23:58 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:21:17.223 03:23:58 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:17.223 03:23:58 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83074 00:21:17.223 killing process with pid 83074 00:21:17.223 03:23:58 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:17.223 03:23:58 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:17.223 03:23:58 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83074' 00:21:17.223 03:23:58 nvmf_dif -- common/autotest_common.sh@969 -- # kill 83074 00:21:17.223 03:23:58 nvmf_dif -- common/autotest_common.sh@974 -- # wait 83074 00:21:17.223 03:23:59 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:21:17.223 03:23:59 nvmf_dif -- nvmf/common.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:17.223 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:17.223 Waiting for block devices as requested 00:21:17.223 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:17.223 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:17.223 03:23:59 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:17.223 03:23:59 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:17.223 03:23:59 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:21:17.223 03:23:59 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:21:17.223 03:23:59 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:21:17.223 03:23:59 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:17.223 03:23:59 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:17.223 03:23:59 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:17.223 03:23:59 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:17.223 03:23:59 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:17.223 03:23:59 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:17.223 03:23:59 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:17.223 03:23:59 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:17.223 03:23:59 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:17.223 03:23:59 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:17.223 03:23:59 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:17.223 03:23:59 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:17.223 03:23:59 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:17.223 03:23:59 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:17.223 03:23:59 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:17.223 03:23:59 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:17.223 03:23:59 
nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:17.223 03:23:59 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.223 03:23:59 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:17.223 03:23:59 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.223 03:23:59 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:21:17.223 ************************************ 00:21:17.223 END TEST nvmf_dif 00:21:17.223 ************************************ 00:21:17.223 00:21:17.223 real 1m0.526s 00:21:17.223 user 3m50.760s 00:21:17.223 sys 0m17.795s 00:21:17.223 03:23:59 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:17.223 03:23:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:17.223 03:23:59 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:17.223 03:23:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:17.223 03:23:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:17.223 03:23:59 -- common/autotest_common.sh@10 -- # set +x 00:21:17.223 ************************************ 00:21:17.223 START TEST nvmf_abort_qd_sizes 00:21:17.223 ************************************ 00:21:17.223 03:23:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:17.223 * Looking for test storage... 00:21:17.223 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:17.223 03:23:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:17.223 03:24:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:17.223 03:24:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:21:17.223 03:24:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:17.223 03:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:17.223 03:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:17.223 03:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:17.223 03:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:21:17.223 03:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:21:17.223 03:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:21:17.223 03:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:21:17.223 03:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:21:17.223 03:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:21:17.223 03:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:21:17.223 03:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:17.223 03:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:21:17.223 03:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:21:17.223 03:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:17.223 03:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:17.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.224 --rc genhtml_branch_coverage=1 00:21:17.224 --rc genhtml_function_coverage=1 00:21:17.224 --rc genhtml_legend=1 00:21:17.224 --rc geninfo_all_blocks=1 00:21:17.224 --rc geninfo_unexecuted_blocks=1 00:21:17.224 00:21:17.224 ' 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:17.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.224 --rc genhtml_branch_coverage=1 00:21:17.224 --rc genhtml_function_coverage=1 00:21:17.224 --rc genhtml_legend=1 00:21:17.224 --rc geninfo_all_blocks=1 00:21:17.224 --rc geninfo_unexecuted_blocks=1 00:21:17.224 00:21:17.224 ' 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:17.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.224 --rc genhtml_branch_coverage=1 00:21:17.224 --rc genhtml_function_coverage=1 00:21:17.224 --rc genhtml_legend=1 00:21:17.224 --rc geninfo_all_blocks=1 00:21:17.224 --rc geninfo_unexecuted_blocks=1 00:21:17.224 00:21:17.224 ' 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:17.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.224 --rc genhtml_branch_coverage=1 00:21:17.224 --rc genhtml_function_coverage=1 00:21:17.224 --rc genhtml_legend=1 00:21:17.224 --rc geninfo_all_blocks=1 00:21:17.224 --rc geninfo_unexecuted_blocks=1 00:21:17.224 00:21:17.224 ' 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:17.224 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@458 -- # nvmf_veth_init 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:17.224 Cannot find device "nvmf_init_br" 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:17.224 Cannot find device "nvmf_init_br2" 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:17.224 Cannot find device "nvmf_tgt_br" 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:17.224 Cannot find device "nvmf_tgt_br2" 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:17.224 Cannot find device "nvmf_init_br" 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:17.224 Cannot find device "nvmf_init_br2" 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:17.224 Cannot find device "nvmf_tgt_br" 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:17.224 Cannot find device "nvmf_tgt_br2" 00:21:17.224 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:17.225 Cannot find device "nvmf_br" 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:17.225 Cannot find device "nvmf_init_if" 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:17.225 Cannot find device "nvmf_init_if2" 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:17.225 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:17.225 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:17.225 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:17.225 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:21:17.225 00:21:17.225 --- 10.0.0.3 ping statistics --- 00:21:17.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.225 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:17.225 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:17.225 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:21:17.225 00:21:17.225 --- 10.0.0.4 ping statistics --- 00:21:17.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.225 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:17.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:17.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:21:17.225 00:21:17.225 --- 10.0.0.1 ping statistics --- 00:21:17.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.225 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:17.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:17.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:21:17.225 00:21:17.225 --- 10.0.0.2 ping statistics --- 00:21:17.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.225 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # return 0 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:21:17.225 03:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:18.161 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:18.161 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:18.161 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:18.161 03:24:01 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:18.161 03:24:01 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:18.161 03:24:01 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:18.161 03:24:01 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:18.161 03:24:01 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:18.161 03:24:01 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:18.161 03:24:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:21:18.161 03:24:01 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:18.161 03:24:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:18.161 03:24:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:18.161 03:24:01 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=84478 00:21:18.161 03:24:01 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:21:18.161 03:24:01 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 84478 00:21:18.161 03:24:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 84478 ']' 00:21:18.161 03:24:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.161 03:24:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:18.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.161 03:24:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.161 03:24:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:18.161 03:24:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:18.161 [2024-10-09 03:24:01.446391] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:21:18.161 [2024-10-09 03:24:01.446487] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.419 [2024-10-09 03:24:01.589408] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:18.419 [2024-10-09 03:24:01.703843] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.419 [2024-10-09 03:24:01.703900] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:18.419 [2024-10-09 03:24:01.703921] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:18.419 [2024-10-09 03:24:01.703936] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:18.419 [2024-10-09 03:24:01.703949] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:18.419 [2024-10-09 03:24:01.709083] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.419 [2024-10-09 03:24:01.709193] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:18.419 [2024-10-09 03:24:01.709750] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:21:18.419 [2024-10-09 03:24:01.709801] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.678 [2024-10-09 03:24:01.772529] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:19.245 03:24:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:19.245 03:24:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:21:19.245 03:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:19.245 03:24:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:19.245 03:24:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:19.245 03:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.245 03:24:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:21:19.245 03:24:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:21:19.245 03:24:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:21:19.246 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:21:19.246 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:21:19.246 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:21:19.246 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:21:19.246 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:21:19.246 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:21:19.246 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:21:19.246 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:21:19.246 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:21:19.246 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:21:19.246 03:24:02 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:21:19.246 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:21:19.246 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:21:19.246 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:21:19.246 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
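The nvme_in_userspace enumeration traced above (printf'ing class 01, subclass 08, prog-if 02 and filtering lspci output) reduces to a single pipeline; on this VM it yields the two controllers that were bound to uio_pci_generic earlier by setup.sh:

    # PCI class 0108 with programming interface 02 == NVM Express controller
    lspci -mm -n -D | grep -i -- -p02 | awk -v cc="0108" -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
    # -> 0000:00:10.0
    #    0000:00:11.0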
00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:19.505 03:24:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:19.505 ************************************ 00:21:19.505 START TEST spdk_target_abort 00:21:19.505 ************************************ 00:21:19.505 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:21:19.505 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:21:19.505 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:21:19.505 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.505 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:19.505 spdk_targetn1 00:21:19.505 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.505 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:19.505 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.505 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:19.505 [2024-10-09 03:24:02.658417] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.505 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.505 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:21:19.505 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.505 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:19.505 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.505 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:21:19.505 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.505 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:19.505 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.505 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:21:19.505 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.505 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:19.505 [2024-10-09 03:24:02.690716] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:19.506 03:24:02 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.506 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:21:19.506 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:19.506 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:19.506 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:21:19.506 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:19.506 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:19.506 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:19.506 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:19.506 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:19.506 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:19.506 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:19.506 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:19.506 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:19.506 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:19.506 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:21:19.506 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:19.506 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:19.506 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:19.506 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:19.506 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:19.506 03:24:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:22.819 Initializing NVMe Controllers 00:21:22.819 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:22.819 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:22.819 Initialization complete. Launching workers. 
00:21:22.819 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10016, failed: 0 00:21:22.819 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1064, failed to submit 8952 00:21:22.819 success 823, unsuccessful 241, failed 0 00:21:22.819 03:24:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:22.819 03:24:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:26.103 Initializing NVMe Controllers 00:21:26.103 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:26.103 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:26.103 Initialization complete. Launching workers. 00:21:26.103 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8952, failed: 0 00:21:26.103 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1181, failed to submit 7771 00:21:26.103 success 358, unsuccessful 823, failed 0 00:21:26.103 03:24:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:26.103 03:24:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:29.393 Initializing NVMe Controllers 00:21:29.393 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:29.393 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:29.393 Initialization complete. Launching workers. 
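A quick consistency check for these abort statistics (the qd=64 pass follows directly below): within each pass, "I/O completed" equals "abort submitted" plus "failed to submit", and "abort submitted" splits into success plus unsuccessful. For the qd=4 pass: 1064 + 8952 = 10016 and 823 + 241 = 1064; for the qd=24 pass: 1181 + 7771 = 8952 and 358 + 823 = 1181.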
00:21:29.393 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32063, failed: 0 00:21:29.393 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2329, failed to submit 29734 00:21:29.393 success 482, unsuccessful 1847, failed 0 00:21:29.393 03:24:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:21:29.393 03:24:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.393 03:24:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:29.393 03:24:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.393 03:24:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:21:29.393 03:24:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.393 03:24:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:29.961 03:24:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.961 03:24:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84478 00:21:29.961 03:24:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 84478 ']' 00:21:29.961 03:24:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 84478 00:21:29.961 03:24:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:21:29.961 03:24:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:29.961 03:24:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84478 00:21:29.961 03:24:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:29.961 killing process with pid 84478 00:21:29.961 03:24:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:29.961 03:24:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84478' 00:21:29.961 03:24:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 84478 00:21:29.961 03:24:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 84478 00:21:30.220 00:21:30.220 real 0m10.799s 00:21:30.220 user 0m43.901s 00:21:30.220 sys 0m2.054s 00:21:30.220 03:24:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:30.220 03:24:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:30.220 ************************************ 00:21:30.220 END TEST spdk_target_abort 00:21:30.220 ************************************ 00:21:30.220 03:24:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:21:30.220 03:24:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:30.220 03:24:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:30.220 03:24:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:30.220 ************************************ 00:21:30.220 START TEST kernel_target_abort 00:21:30.220 
************************************ 00:21:30.220 03:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:21:30.220 03:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:21:30.220 03:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:21:30.220 03:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:30.220 03:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:30.220 03:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:30.220 03:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:30.220 03:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:21:30.220 03:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:30.220 03:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:21:30.220 03:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:21:30.220 03:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:21:30.220 03:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:30.220 03:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:30.220 03:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:21:30.220 03:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:30.220 03:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:30.220 03:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:30.220 03:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:21:30.220 03:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:30.220 03:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:21:30.220 03:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:30.220 03:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:30.787 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:30.787 Waiting for block devices as requested 00:21:30.787 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:30.787 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:30.788 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:21:30.788 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:30.788 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:21:30.788 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:21:30.788 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:30.788 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:30.788 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:21:30.788 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:30.788 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:31.047 No valid GPT data, bailing 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n2 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n2 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:31.047 No valid GPT data, bailing 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n2 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n3 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n3 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:31.047 No valid GPT data, bailing 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n3 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme1n1 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme1n1 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:31.047 No valid GPT data, bailing 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme1n1 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ 
-b /dev/nvme1n1 ]] 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:31.047 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:31.306 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:31.306 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:21:31.306 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme1n1 00:21:31.306 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:21:31.306 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:21:31.306 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:21:31.306 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:21:31.306 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:21:31.306 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:31.306 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 --hostid=cb2c30f2-294c-46db-807f-ce0b3b357918 -a 10.0.0.1 -t tcp -s 4420 00:21:31.306 00:21:31.306 Discovery Log Number of Records 2, Generation counter 2 00:21:31.306 =====Discovery Log Entry 0====== 00:21:31.306 trtype: tcp 00:21:31.306 adrfam: ipv4 00:21:31.306 subtype: current discovery subsystem 00:21:31.306 treq: not specified, sq flow control disable supported 00:21:31.306 portid: 1 00:21:31.306 trsvcid: 4420 00:21:31.306 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:31.306 traddr: 10.0.0.1 00:21:31.306 eflags: none 00:21:31.306 sectype: none 00:21:31.306 =====Discovery Log Entry 1====== 00:21:31.306 trtype: tcp 00:21:31.306 adrfam: ipv4 00:21:31.306 subtype: nvme subsystem 00:21:31.306 treq: not specified, sq flow control disable supported 00:21:31.306 portid: 1 00:21:31.306 trsvcid: 4420 00:21:31.306 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:31.306 traddr: 10.0.0.1 00:21:31.306 eflags: none 00:21:31.306 sectype: none 00:21:31.306 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:21:31.306 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:31.306 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:31.306 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:21:31.306 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:31.306 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:31.307 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:31.307 03:24:14 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:31.307 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:31.307 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:31.307 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:31.307 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:31.307 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:31.307 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:31.307 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:21:31.307 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:31.307 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:21:31.307 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:31.307 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:31.307 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:31.307 03:24:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:34.594 Initializing NVMe Controllers 00:21:34.594 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:34.594 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:34.594 Initialization complete. Launching workers. 00:21:34.594 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30402, failed: 0 00:21:34.594 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30402, failed to submit 0 00:21:34.594 success 0, unsuccessful 30402, failed 0 00:21:34.594 03:24:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:34.594 03:24:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:37.886 Initializing NVMe Controllers 00:21:37.886 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:37.886 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:37.886 Initialization complete. Launching workers. 
00:21:37.886 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66544, failed: 0 00:21:37.886 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28261, failed to submit 38283 00:21:37.886 success 0, unsuccessful 28261, failed 0 00:21:37.886 03:24:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:37.886 03:24:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:41.182 Initializing NVMe Controllers 00:21:41.183 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:41.183 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:41.183 Initialization complete. Launching workers. 00:21:41.183 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 78466, failed: 0 00:21:41.183 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19582, failed to submit 58884 00:21:41.183 success 0, unsuccessful 19582, failed 0 00:21:41.183 03:24:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:21:41.183 03:24:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:41.183 03:24:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:21:41.183 03:24:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:41.183 03:24:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:41.183 03:24:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:41.183 03:24:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:41.183 03:24:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:21:41.183 03:24:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:21:41.183 03:24:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:41.441 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:43.420 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:43.420 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:43.420 00:21:43.420 real 0m12.917s 00:21:43.420 user 0m6.065s 00:21:43.420 sys 0m4.295s 00:21:43.420 03:24:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:43.420 03:24:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:43.420 ************************************ 00:21:43.420 END TEST kernel_target_abort 00:21:43.420 ************************************ 00:21:43.420 03:24:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:43.420 03:24:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:21:43.420 
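The kernel nvmet target exercised above was assembled and then torn down purely through configfs. The xtrace output hides the redirect targets of the echo calls, so the attribute file names below (attr_model, attr_allow_any_host, device_path, enable, addr_*) are the standard kernel nvmet ones and are an assumption, not something visible in the log:

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    modprobe nvmet
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"
    # clean_kernel_target reverses the above
    echo 0 > "$subsys/namespaces/1/enable"
    rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$subsys/namespaces/1" "$port" "$subsys"
    modprobe -r nvmet_tcp nvmet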
03:24:26 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:43.420 03:24:26 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:21:43.420 03:24:26 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:43.420 03:24:26 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:21:43.420 03:24:26 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:43.420 03:24:26 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:43.420 rmmod nvme_tcp 00:21:43.420 rmmod nvme_fabrics 00:21:43.420 rmmod nvme_keyring 00:21:43.420 03:24:26 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:43.420 03:24:26 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:21:43.420 03:24:26 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:21:43.420 03:24:26 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 84478 ']' 00:21:43.420 03:24:26 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 84478 00:21:43.420 03:24:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 84478 ']' 00:21:43.420 03:24:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 84478 00:21:43.420 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (84478) - No such process 00:21:43.420 Process with pid 84478 is not found 00:21:43.420 03:24:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 84478 is not found' 00:21:43.420 03:24:26 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:21:43.420 03:24:26 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:43.679 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:43.679 Waiting for block devices as requested 00:21:43.679 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:43.937 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:43.937 03:24:27 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:43.937 03:24:27 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:43.937 03:24:27 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:21:43.937 03:24:27 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:21:43.937 03:24:27 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:43.937 03:24:27 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:21:43.937 03:24:27 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:43.937 03:24:27 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:43.937 03:24:27 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:43.937 03:24:27 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:43.937 03:24:27 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:43.937 03:24:27 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:43.937 03:24:27 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:43.937 03:24:27 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:43.937 03:24:27 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:43.937 03:24:27 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:43.937 03:24:27 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:43.937 03:24:27 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:44.196 03:24:27 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:44.196 03:24:27 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:44.196 03:24:27 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:44.196 03:24:27 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:44.196 03:24:27 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.196 03:24:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:44.196 03:24:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.196 03:24:27 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:21:44.196 00:21:44.196 real 0m27.425s 00:21:44.196 user 0m51.346s 00:21:44.196 sys 0m7.838s 00:21:44.196 03:24:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:44.196 03:24:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:44.196 ************************************ 00:21:44.196 END TEST nvmf_abort_qd_sizes 00:21:44.196 ************************************ 00:21:44.196 03:24:27 -- spdk/autotest.sh@288 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:44.196 03:24:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:44.196 03:24:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:44.196 03:24:27 -- common/autotest_common.sh@10 -- # set +x 00:21:44.196 ************************************ 00:21:44.196 START TEST keyring_file 00:21:44.196 ************************************ 00:21:44.196 03:24:27 keyring_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:44.196 * Looking for test storage... 
00:21:44.196 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:44.196 03:24:27 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:44.196 03:24:27 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:44.196 03:24:27 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:21:44.456 03:24:27 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:44.456 03:24:27 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:44.456 03:24:27 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:44.456 03:24:27 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:44.456 03:24:27 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:21:44.456 03:24:27 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:21:44.456 03:24:27 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:21:44.456 03:24:27 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:21:44.456 03:24:27 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:21:44.456 03:24:27 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:21:44.456 03:24:27 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:21:44.456 03:24:27 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:44.456 03:24:27 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:21:44.456 03:24:27 keyring_file -- scripts/common.sh@345 -- # : 1 00:21:44.456 03:24:27 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:44.456 03:24:27 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:44.456 03:24:27 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:21:44.456 03:24:27 keyring_file -- scripts/common.sh@353 -- # local d=1 00:21:44.456 03:24:27 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:44.456 03:24:27 keyring_file -- scripts/common.sh@355 -- # echo 1 00:21:44.456 03:24:27 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:21:44.456 03:24:27 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:21:44.456 03:24:27 keyring_file -- scripts/common.sh@353 -- # local d=2 00:21:44.456 03:24:27 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:44.456 03:24:27 keyring_file -- scripts/common.sh@355 -- # echo 2 00:21:44.456 03:24:27 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:21:44.456 03:24:27 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:44.456 03:24:27 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:44.456 03:24:27 keyring_file -- scripts/common.sh@368 -- # return 0 00:21:44.456 03:24:27 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:44.456 03:24:27 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:44.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.456 --rc genhtml_branch_coverage=1 00:21:44.456 --rc genhtml_function_coverage=1 00:21:44.456 --rc genhtml_legend=1 00:21:44.456 --rc geninfo_all_blocks=1 00:21:44.456 --rc geninfo_unexecuted_blocks=1 00:21:44.456 00:21:44.456 ' 00:21:44.456 03:24:27 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:44.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.456 --rc genhtml_branch_coverage=1 00:21:44.456 --rc genhtml_function_coverage=1 00:21:44.456 --rc genhtml_legend=1 00:21:44.456 --rc geninfo_all_blocks=1 00:21:44.456 --rc 
geninfo_unexecuted_blocks=1 00:21:44.456 00:21:44.456 ' 00:21:44.456 03:24:27 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:44.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.456 --rc genhtml_branch_coverage=1 00:21:44.456 --rc genhtml_function_coverage=1 00:21:44.456 --rc genhtml_legend=1 00:21:44.456 --rc geninfo_all_blocks=1 00:21:44.456 --rc geninfo_unexecuted_blocks=1 00:21:44.456 00:21:44.456 ' 00:21:44.456 03:24:27 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:44.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.456 --rc genhtml_branch_coverage=1 00:21:44.456 --rc genhtml_function_coverage=1 00:21:44.456 --rc genhtml_legend=1 00:21:44.456 --rc geninfo_all_blocks=1 00:21:44.456 --rc geninfo_unexecuted_blocks=1 00:21:44.456 00:21:44.456 ' 00:21:44.456 03:24:27 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:44.456 03:24:27 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:44.456 03:24:27 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:21:44.456 03:24:27 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:44.456 03:24:27 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:44.456 03:24:27 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:44.456 03:24:27 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:44.456 03:24:27 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:44.456 03:24:27 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:44.456 03:24:27 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:44.456 03:24:27 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:44.456 03:24:27 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:44.456 03:24:27 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:44.456 03:24:27 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:21:44.456 03:24:27 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:21:44.456 03:24:27 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:44.456 03:24:27 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:44.456 03:24:27 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:44.456 03:24:27 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:44.456 03:24:27 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:44.456 03:24:27 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:21:44.456 03:24:27 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:44.456 03:24:27 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:44.456 03:24:27 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:44.456 03:24:27 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.456 03:24:27 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.456 03:24:27 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.456 03:24:27 keyring_file -- paths/export.sh@5 -- # export PATH 00:21:44.456 03:24:27 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.456 03:24:27 keyring_file -- nvmf/common.sh@51 -- # : 0 00:21:44.456 03:24:27 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:44.456 03:24:27 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:44.456 03:24:27 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:44.456 03:24:27 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:44.456 03:24:27 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:44.456 03:24:27 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:44.456 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:44.456 03:24:27 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:44.456 03:24:27 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:44.456 03:24:27 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:44.456 03:24:27 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:44.456 03:24:27 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:44.456 03:24:27 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:44.456 03:24:27 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:21:44.456 03:24:27 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:21:44.456 03:24:27 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:21:44.456 03:24:27 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:44.456 03:24:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:44.456 03:24:27 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:44.456 03:24:27 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:44.456 03:24:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:44.456 03:24:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:44.456 03:24:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.DEhEI42PdE 00:21:44.456 03:24:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:44.456 03:24:27 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:44.456 03:24:27 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:21:44.456 03:24:27 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:21:44.456 03:24:27 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:21:44.456 03:24:27 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:21:44.456 03:24:27 keyring_file -- nvmf/common.sh@731 -- # python - 00:21:44.456 03:24:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.DEhEI42PdE 00:21:44.456 03:24:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.DEhEI42PdE 00:21:44.456 03:24:27 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.DEhEI42PdE 00:21:44.456 03:24:27 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:21:44.457 03:24:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:44.457 03:24:27 keyring_file -- keyring/common.sh@17 -- # name=key1 00:21:44.457 03:24:27 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:44.457 03:24:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:44.457 03:24:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:44.457 03:24:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.QpiZhX1oLA 00:21:44.457 03:24:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:44.457 03:24:27 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:44.457 03:24:27 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:21:44.457 03:24:27 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:21:44.457 03:24:27 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:21:44.457 03:24:27 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:21:44.457 03:24:27 keyring_file -- nvmf/common.sh@731 -- # python - 00:21:44.457 03:24:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.QpiZhX1oLA 00:21:44.457 03:24:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.QpiZhX1oLA 00:21:44.457 03:24:27 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.QpiZhX1oLA 00:21:44.457 03:24:27 keyring_file -- keyring/file.sh@30 -- # tgtpid=85394 00:21:44.457 03:24:27 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:44.457 03:24:27 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85394 00:21:44.457 03:24:27 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 85394 ']' 00:21:44.457 03:24:27 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.457 03:24:27 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:44.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
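Each prep_key call above produces an on-disk TLS PSK file: the hex key and digest are handed to format_interchange_psk, whose inline "python -" step (the script body is not expanded in the trace) renders an NVMe TLS interchange string of the form NVMeTLSkey-1:<digest>:<base64 of key material plus CRC-32>:, and the result lands in a mktemp path restricted to mode 0600. A standalone sketch of the same steps, with the redirect into the temp file inferred rather than visible in the xtrace output:

    key0path=$(mktemp)
    format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0path"
    chmod 0600 "$key0path"
    # the path is later registered with the bdevperf instance as keyring key "key0"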
00:21:44.457 03:24:27 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.457 03:24:27 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:44.457 03:24:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:44.715 [2024-10-09 03:24:27.807856] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:21:44.715 [2024-10-09 03:24:27.807952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85394 ] 00:21:44.715 [2024-10-09 03:24:27.947141] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.974 [2024-10-09 03:24:28.054767] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.974 [2024-10-09 03:24:28.130351] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:45.909 03:24:28 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:45.909 03:24:28 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:21:45.909 03:24:28 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:21:45.909 03:24:28 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.909 03:24:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:45.909 [2024-10-09 03:24:28.854189] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:45.909 null0 00:21:45.909 [2024-10-09 03:24:28.886175] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:45.909 [2024-10-09 03:24:28.886389] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:45.909 03:24:28 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.910 03:24:28 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:45.910 03:24:28 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:21:45.910 03:24:28 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:45.910 03:24:28 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:45.910 03:24:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:45.910 03:24:28 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:45.910 03:24:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:45.910 03:24:28 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:45.910 03:24:28 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.910 03:24:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:45.910 [2024-10-09 03:24:28.914167] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:21:45.910 request: 00:21:45.910 { 00:21:45.910 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:21:45.910 "secure_channel": false, 00:21:45.910 "listen_address": { 00:21:45.910 "trtype": "tcp", 00:21:45.910 "traddr": "127.0.0.1", 00:21:45.910 "trsvcid": "4420" 00:21:45.910 }, 00:21:45.910 "method": "nvmf_subsystem_add_listener", 
00:21:45.910 "req_id": 1 00:21:45.910 } 00:21:45.910 Got JSON-RPC error response 00:21:45.910 response: 00:21:45.910 { 00:21:45.910 "code": -32602, 00:21:45.910 "message": "Invalid parameters" 00:21:45.910 } 00:21:45.910 03:24:28 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:45.910 03:24:28 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:21:45.910 03:24:28 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:45.910 03:24:28 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:45.910 03:24:28 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:45.910 03:24:28 keyring_file -- keyring/file.sh@47 -- # bperfpid=85411 00:21:45.910 03:24:28 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85411 /var/tmp/bperf.sock 00:21:45.910 03:24:28 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:21:45.910 03:24:28 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 85411 ']' 00:21:45.910 03:24:28 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:45.910 03:24:28 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:45.910 03:24:28 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:45.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:45.910 03:24:28 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:45.910 03:24:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:45.910 [2024-10-09 03:24:28.979977] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:21:45.910 [2024-10-09 03:24:28.980079] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85411 ] 00:21:45.910 [2024-10-09 03:24:29.114438] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.169 [2024-10-09 03:24:29.213161] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.169 [2024-10-09 03:24:29.275069] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:46.736 03:24:29 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:46.736 03:24:29 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:21:46.736 03:24:29 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DEhEI42PdE 00:21:46.736 03:24:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DEhEI42PdE 00:21:46.994 03:24:30 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.QpiZhX1oLA 00:21:46.994 03:24:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.QpiZhX1oLA 00:21:47.252 03:24:30 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:21:47.252 03:24:30 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:21:47.252 03:24:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:47.252 03:24:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:47.252 03:24:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:47.510 03:24:30 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.DEhEI42PdE == \/\t\m\p\/\t\m\p\.\D\E\h\E\I\4\2\P\d\E ]] 00:21:47.510 03:24:30 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:21:47.510 03:24:30 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:21:47.510 03:24:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:47.510 03:24:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:47.510 03:24:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:47.769 03:24:31 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.QpiZhX1oLA == \/\t\m\p\/\t\m\p\.\Q\p\i\Z\h\X\1\o\L\A ]] 00:21:47.769 03:24:31 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:21:47.769 03:24:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:47.769 03:24:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:47.769 03:24:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:47.769 03:24:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:47.769 03:24:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:48.027 03:24:31 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:21:48.027 03:24:31 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:21:48.027 03:24:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:48.027 03:24:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:48.027 03:24:31 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:48.027 03:24:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:48.027 03:24:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:48.286 03:24:31 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:21:48.286 03:24:31 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:48.286 03:24:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:48.544 [2024-10-09 03:24:31.760931] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:48.544 nvme0n1 00:21:48.802 03:24:31 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:21:48.802 03:24:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:48.802 03:24:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:48.803 03:24:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:48.803 03:24:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:48.803 03:24:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:49.061 03:24:32 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:21:49.061 03:24:32 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:21:49.061 03:24:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:49.061 03:24:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:49.061 03:24:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:49.061 03:24:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:49.061 03:24:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:49.320 03:24:32 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:21:49.320 03:24:32 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:49.579 Running I/O for 1 seconds... 
00:21:50.514 12080.00 IOPS, 47.19 MiB/s 00:21:50.514 Latency(us) 00:21:50.514 [2024-10-09T03:24:33.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.514 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:21:50.514 nvme0n1 : 1.01 12128.18 47.38 0.00 0.00 10523.64 4259.84 16443.58 00:21:50.514 [2024-10-09T03:24:33.817Z] =================================================================================================================== 00:21:50.514 [2024-10-09T03:24:33.817Z] Total : 12128.18 47.38 0.00 0.00 10523.64 4259.84 16443.58 00:21:50.514 { 00:21:50.514 "results": [ 00:21:50.514 { 00:21:50.514 "job": "nvme0n1", 00:21:50.514 "core_mask": "0x2", 00:21:50.514 "workload": "randrw", 00:21:50.514 "percentage": 50, 00:21:50.514 "status": "finished", 00:21:50.514 "queue_depth": 128, 00:21:50.514 "io_size": 4096, 00:21:50.514 "runtime": 1.006664, 00:21:50.514 "iops": 12128.17782298761, 00:21:50.514 "mibps": 47.375694621045355, 00:21:50.514 "io_failed": 0, 00:21:50.514 "io_timeout": 0, 00:21:50.514 "avg_latency_us": 10523.63867489706, 00:21:50.514 "min_latency_us": 4259.84, 00:21:50.514 "max_latency_us": 16443.578181818182 00:21:50.514 } 00:21:50.514 ], 00:21:50.514 "core_count": 1 00:21:50.514 } 00:21:50.514 03:24:33 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:50.514 03:24:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:50.773 03:24:33 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:21:50.773 03:24:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:50.773 03:24:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:50.773 03:24:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:50.773 03:24:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:50.773 03:24:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:51.032 03:24:34 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:21:51.032 03:24:34 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:21:51.032 03:24:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:51.032 03:24:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:51.032 03:24:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:51.032 03:24:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:51.032 03:24:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:51.290 03:24:34 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:21:51.290 03:24:34 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:51.290 03:24:34 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:21:51.290 03:24:34 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:51.290 03:24:34 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:21:51.290 03:24:34 keyring_file -- common/autotest_common.sh@642 -- 
# case "$(type -t "$arg")" in 00:21:51.290 03:24:34 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:21:51.290 03:24:34 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:51.290 03:24:34 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:51.290 03:24:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:51.549 [2024-10-09 03:24:34.763381] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:51.549 [2024-10-09 03:24:34.763917] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13456a0 (107): Transport endpoint is not connected 00:21:51.549 [2024-10-09 03:24:34.764900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13456a0 (9): Bad file descriptor 00:21:51.550 [2024-10-09 03:24:34.765897] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:51.550 [2024-10-09 03:24:34.765950] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:51.550 [2024-10-09 03:24:34.766006] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:21:51.550 [2024-10-09 03:24:34.766028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:21:51.550 request: 00:21:51.550 { 00:21:51.550 "name": "nvme0", 00:21:51.550 "trtype": "tcp", 00:21:51.550 "traddr": "127.0.0.1", 00:21:51.550 "adrfam": "ipv4", 00:21:51.550 "trsvcid": "4420", 00:21:51.550 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:51.550 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:51.550 "prchk_reftag": false, 00:21:51.550 "prchk_guard": false, 00:21:51.550 "hdgst": false, 00:21:51.550 "ddgst": false, 00:21:51.550 "psk": "key1", 00:21:51.550 "allow_unrecognized_csi": false, 00:21:51.550 "method": "bdev_nvme_attach_controller", 00:21:51.550 "req_id": 1 00:21:51.550 } 00:21:51.550 Got JSON-RPC error response 00:21:51.550 response: 00:21:51.550 { 00:21:51.550 "code": -5, 00:21:51.550 "message": "Input/output error" 00:21:51.550 } 00:21:51.550 03:24:34 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:21:51.550 03:24:34 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:51.550 03:24:34 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:51.550 03:24:34 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:51.550 03:24:34 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:21:51.550 03:24:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:51.550 03:24:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:51.550 03:24:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:51.550 03:24:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:51.550 03:24:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:51.810 03:24:35 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:21:51.810 03:24:35 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:21:51.810 03:24:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:51.810 03:24:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:51.810 03:24:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:51.810 03:24:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:51.810 03:24:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:52.102 03:24:35 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:21:52.102 03:24:35 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:21:52.102 03:24:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:52.362 03:24:35 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:21:52.362 03:24:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:21:52.621 03:24:35 keyring_file -- keyring/file.sh@78 -- # jq length 00:21:52.621 03:24:35 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:21:52.621 03:24:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:52.879 03:24:36 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:21:52.879 03:24:36 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.DEhEI42PdE 00:21:52.879 03:24:36 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.DEhEI42PdE 00:21:52.879 03:24:36 keyring_file -- 
common/autotest_common.sh@650 -- # local es=0 00:21:52.879 03:24:36 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.DEhEI42PdE 00:21:52.879 03:24:36 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:21:52.879 03:24:36 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:52.879 03:24:36 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:21:52.879 03:24:36 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:52.879 03:24:36 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DEhEI42PdE 00:21:52.879 03:24:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DEhEI42PdE 00:21:53.137 [2024-10-09 03:24:36.388027] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.DEhEI42PdE': 0100660 00:21:53.137 [2024-10-09 03:24:36.388094] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:53.137 request: 00:21:53.137 { 00:21:53.137 "name": "key0", 00:21:53.137 "path": "/tmp/tmp.DEhEI42PdE", 00:21:53.137 "method": "keyring_file_add_key", 00:21:53.137 "req_id": 1 00:21:53.137 } 00:21:53.137 Got JSON-RPC error response 00:21:53.137 response: 00:21:53.137 { 00:21:53.137 "code": -1, 00:21:53.137 "message": "Operation not permitted" 00:21:53.137 } 00:21:53.137 03:24:36 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:21:53.137 03:24:36 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:53.137 03:24:36 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:53.137 03:24:36 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:53.137 03:24:36 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.DEhEI42PdE 00:21:53.137 03:24:36 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DEhEI42PdE 00:21:53.137 03:24:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DEhEI42PdE 00:21:53.396 03:24:36 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.DEhEI42PdE 00:21:53.655 03:24:36 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:21:53.655 03:24:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:53.655 03:24:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:53.655 03:24:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:53.655 03:24:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:53.655 03:24:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:53.655 03:24:36 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:21:53.655 03:24:36 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:53.655 03:24:36 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:21:53.655 03:24:36 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:53.655 03:24:36 
keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:21:53.655 03:24:36 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:53.655 03:24:36 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:21:53.655 03:24:36 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:53.655 03:24:36 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:53.655 03:24:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:53.913 [2024-10-09 03:24:37.115300] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.DEhEI42PdE': No such file or directory 00:21:53.914 [2024-10-09 03:24:37.115333] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:21:53.914 [2024-10-09 03:24:37.115352] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:21:53.914 [2024-10-09 03:24:37.115361] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:21:53.914 [2024-10-09 03:24:37.115370] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:53.914 [2024-10-09 03:24:37.115378] bdev_nvme.c:6438:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:21:53.914 request: 00:21:53.914 { 00:21:53.914 "name": "nvme0", 00:21:53.914 "trtype": "tcp", 00:21:53.914 "traddr": "127.0.0.1", 00:21:53.914 "adrfam": "ipv4", 00:21:53.914 "trsvcid": "4420", 00:21:53.914 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:53.914 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:53.914 "prchk_reftag": false, 00:21:53.914 "prchk_guard": false, 00:21:53.914 "hdgst": false, 00:21:53.914 "ddgst": false, 00:21:53.914 "psk": "key0", 00:21:53.914 "allow_unrecognized_csi": false, 00:21:53.914 "method": "bdev_nvme_attach_controller", 00:21:53.914 "req_id": 1 00:21:53.914 } 00:21:53.914 Got JSON-RPC error response 00:21:53.914 response: 00:21:53.914 { 00:21:53.914 "code": -19, 00:21:53.914 "message": "No such device" 00:21:53.914 } 00:21:53.914 03:24:37 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:21:53.914 03:24:37 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:53.914 03:24:37 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:53.914 03:24:37 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:53.914 03:24:37 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:21:53.914 03:24:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:54.172 03:24:37 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:54.173 03:24:37 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:54.173 03:24:37 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:54.173 03:24:37 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:54.173 
03:24:37 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:54.173 03:24:37 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:54.173 03:24:37 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.1bCQbybRPD 00:21:54.173 03:24:37 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:54.173 03:24:37 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:54.173 03:24:37 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:21:54.173 03:24:37 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:21:54.173 03:24:37 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:21:54.173 03:24:37 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:21:54.173 03:24:37 keyring_file -- nvmf/common.sh@731 -- # python - 00:21:54.431 03:24:37 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.1bCQbybRPD 00:21:54.431 03:24:37 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.1bCQbybRPD 00:21:54.431 03:24:37 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.1bCQbybRPD 00:21:54.431 03:24:37 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1bCQbybRPD 00:21:54.431 03:24:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1bCQbybRPD 00:21:54.689 03:24:37 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:54.690 03:24:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:54.948 nvme0n1 00:21:54.948 03:24:38 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:21:54.948 03:24:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:54.948 03:24:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:54.948 03:24:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:54.948 03:24:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:54.948 03:24:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:55.207 03:24:38 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:21:55.207 03:24:38 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:21:55.207 03:24:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:55.520 03:24:38 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:21:55.520 03:24:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:55.520 03:24:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:55.520 03:24:38 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:21:55.520 03:24:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:55.779 03:24:38 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:21:55.779 03:24:38 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:21:55.779 03:24:38 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:21:55.779 03:24:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:55.779 03:24:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:55.779 03:24:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:55.779 03:24:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:56.037 03:24:39 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:21:56.037 03:24:39 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:56.037 03:24:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:56.296 03:24:39 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:21:56.296 03:24:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:56.296 03:24:39 keyring_file -- keyring/file.sh@105 -- # jq length 00:21:56.555 03:24:39 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:21:56.555 03:24:39 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1bCQbybRPD 00:21:56.555 03:24:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1bCQbybRPD 00:21:56.555 03:24:39 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.QpiZhX1oLA 00:21:56.555 03:24:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.QpiZhX1oLA 00:21:56.813 03:24:40 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:56.813 03:24:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:57.072 nvme0n1 00:21:57.331 03:24:40 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:21:57.331 03:24:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:21:57.590 03:24:40 keyring_file -- keyring/file.sh@113 -- # config='{ 00:21:57.590 "subsystems": [ 00:21:57.590 { 00:21:57.590 "subsystem": "keyring", 00:21:57.590 "config": [ 00:21:57.590 { 00:21:57.590 "method": "keyring_file_add_key", 00:21:57.590 "params": { 00:21:57.590 "name": "key0", 00:21:57.590 "path": "/tmp/tmp.1bCQbybRPD" 00:21:57.590 } 00:21:57.590 }, 00:21:57.590 { 00:21:57.590 "method": "keyring_file_add_key", 00:21:57.590 "params": { 00:21:57.590 "name": "key1", 00:21:57.590 "path": "/tmp/tmp.QpiZhX1oLA" 00:21:57.590 } 00:21:57.590 } 00:21:57.590 ] 00:21:57.590 }, 00:21:57.590 { 00:21:57.590 "subsystem": "iobuf", 00:21:57.590 "config": [ 00:21:57.590 { 00:21:57.590 "method": "iobuf_set_options", 00:21:57.590 "params": { 00:21:57.590 "small_pool_count": 8192, 00:21:57.590 "large_pool_count": 1024, 00:21:57.590 "small_bufsize": 8192, 00:21:57.590 "large_bufsize": 135168 00:21:57.590 } 00:21:57.590 } 00:21:57.590 ] 00:21:57.590 }, 00:21:57.590 { 00:21:57.590 "subsystem": "sock", 00:21:57.590 "config": [ 
00:21:57.590 { 00:21:57.590 "method": "sock_set_default_impl", 00:21:57.590 "params": { 00:21:57.590 "impl_name": "uring" 00:21:57.590 } 00:21:57.590 }, 00:21:57.590 { 00:21:57.590 "method": "sock_impl_set_options", 00:21:57.590 "params": { 00:21:57.590 "impl_name": "ssl", 00:21:57.590 "recv_buf_size": 4096, 00:21:57.590 "send_buf_size": 4096, 00:21:57.590 "enable_recv_pipe": true, 00:21:57.590 "enable_quickack": false, 00:21:57.590 "enable_placement_id": 0, 00:21:57.590 "enable_zerocopy_send_server": true, 00:21:57.590 "enable_zerocopy_send_client": false, 00:21:57.590 "zerocopy_threshold": 0, 00:21:57.590 "tls_version": 0, 00:21:57.590 "enable_ktls": false 00:21:57.590 } 00:21:57.590 }, 00:21:57.590 { 00:21:57.590 "method": "sock_impl_set_options", 00:21:57.590 "params": { 00:21:57.590 "impl_name": "posix", 00:21:57.590 "recv_buf_size": 2097152, 00:21:57.590 "send_buf_size": 2097152, 00:21:57.590 "enable_recv_pipe": true, 00:21:57.590 "enable_quickack": false, 00:21:57.590 "enable_placement_id": 0, 00:21:57.590 "enable_zerocopy_send_server": true, 00:21:57.590 "enable_zerocopy_send_client": false, 00:21:57.590 "zerocopy_threshold": 0, 00:21:57.590 "tls_version": 0, 00:21:57.590 "enable_ktls": false 00:21:57.590 } 00:21:57.590 }, 00:21:57.590 { 00:21:57.590 "method": "sock_impl_set_options", 00:21:57.590 "params": { 00:21:57.590 "impl_name": "uring", 00:21:57.590 "recv_buf_size": 2097152, 00:21:57.590 "send_buf_size": 2097152, 00:21:57.590 "enable_recv_pipe": true, 00:21:57.590 "enable_quickack": false, 00:21:57.590 "enable_placement_id": 0, 00:21:57.590 "enable_zerocopy_send_server": false, 00:21:57.590 "enable_zerocopy_send_client": false, 00:21:57.590 "zerocopy_threshold": 0, 00:21:57.590 "tls_version": 0, 00:21:57.590 "enable_ktls": false 00:21:57.590 } 00:21:57.590 } 00:21:57.590 ] 00:21:57.590 }, 00:21:57.590 { 00:21:57.590 "subsystem": "vmd", 00:21:57.590 "config": [] 00:21:57.590 }, 00:21:57.590 { 00:21:57.590 "subsystem": "accel", 00:21:57.590 "config": [ 00:21:57.590 { 00:21:57.590 "method": "accel_set_options", 00:21:57.590 "params": { 00:21:57.590 "small_cache_size": 128, 00:21:57.590 "large_cache_size": 16, 00:21:57.590 "task_count": 2048, 00:21:57.590 "sequence_count": 2048, 00:21:57.590 "buf_count": 2048 00:21:57.590 } 00:21:57.590 } 00:21:57.590 ] 00:21:57.590 }, 00:21:57.590 { 00:21:57.590 "subsystem": "bdev", 00:21:57.590 "config": [ 00:21:57.590 { 00:21:57.590 "method": "bdev_set_options", 00:21:57.590 "params": { 00:21:57.590 "bdev_io_pool_size": 65535, 00:21:57.590 "bdev_io_cache_size": 256, 00:21:57.590 "bdev_auto_examine": true, 00:21:57.590 "iobuf_small_cache_size": 128, 00:21:57.590 "iobuf_large_cache_size": 16 00:21:57.590 } 00:21:57.590 }, 00:21:57.590 { 00:21:57.590 "method": "bdev_raid_set_options", 00:21:57.590 "params": { 00:21:57.590 "process_window_size_kb": 1024, 00:21:57.590 "process_max_bandwidth_mb_sec": 0 00:21:57.590 } 00:21:57.590 }, 00:21:57.590 { 00:21:57.590 "method": "bdev_iscsi_set_options", 00:21:57.590 "params": { 00:21:57.590 "timeout_sec": 30 00:21:57.590 } 00:21:57.590 }, 00:21:57.590 { 00:21:57.590 "method": "bdev_nvme_set_options", 00:21:57.590 "params": { 00:21:57.590 "action_on_timeout": "none", 00:21:57.590 "timeout_us": 0, 00:21:57.590 "timeout_admin_us": 0, 00:21:57.590 "keep_alive_timeout_ms": 10000, 00:21:57.590 "arbitration_burst": 0, 00:21:57.590 "low_priority_weight": 0, 00:21:57.590 "medium_priority_weight": 0, 00:21:57.590 "high_priority_weight": 0, 00:21:57.590 "nvme_adminq_poll_period_us": 10000, 00:21:57.590 
"nvme_ioq_poll_period_us": 0, 00:21:57.590 "io_queue_requests": 512, 00:21:57.590 "delay_cmd_submit": true, 00:21:57.590 "transport_retry_count": 4, 00:21:57.590 "bdev_retry_count": 3, 00:21:57.590 "transport_ack_timeout": 0, 00:21:57.590 "ctrlr_loss_timeout_sec": 0, 00:21:57.590 "reconnect_delay_sec": 0, 00:21:57.590 "fast_io_fail_timeout_sec": 0, 00:21:57.590 "disable_auto_failback": false, 00:21:57.590 "generate_uuids": false, 00:21:57.590 "transport_tos": 0, 00:21:57.590 "nvme_error_stat": false, 00:21:57.590 "rdma_srq_size": 0, 00:21:57.590 "io_path_stat": false, 00:21:57.590 "allow_accel_sequence": false, 00:21:57.590 "rdma_max_cq_size": 0, 00:21:57.590 "rdma_cm_event_timeout_ms": 0, 00:21:57.590 "dhchap_digests": [ 00:21:57.590 "sha256", 00:21:57.590 "sha384", 00:21:57.590 "sha512" 00:21:57.590 ], 00:21:57.590 "dhchap_dhgroups": [ 00:21:57.590 "null", 00:21:57.590 "ffdhe2048", 00:21:57.590 "ffdhe3072", 00:21:57.590 "ffdhe4096", 00:21:57.590 "ffdhe6144", 00:21:57.590 "ffdhe8192" 00:21:57.590 ] 00:21:57.590 } 00:21:57.590 }, 00:21:57.590 { 00:21:57.590 "method": "bdev_nvme_attach_controller", 00:21:57.590 "params": { 00:21:57.590 "name": "nvme0", 00:21:57.590 "trtype": "TCP", 00:21:57.590 "adrfam": "IPv4", 00:21:57.590 "traddr": "127.0.0.1", 00:21:57.590 "trsvcid": "4420", 00:21:57.590 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:57.590 "prchk_reftag": false, 00:21:57.590 "prchk_guard": false, 00:21:57.590 "ctrlr_loss_timeout_sec": 0, 00:21:57.590 "reconnect_delay_sec": 0, 00:21:57.590 "fast_io_fail_timeout_sec": 0, 00:21:57.590 "psk": "key0", 00:21:57.590 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:57.590 "hdgst": false, 00:21:57.590 "ddgst": false, 00:21:57.590 "multipath": "multipath" 00:21:57.590 } 00:21:57.590 }, 00:21:57.590 { 00:21:57.590 "method": "bdev_nvme_set_hotplug", 00:21:57.590 "params": { 00:21:57.591 "period_us": 100000, 00:21:57.591 "enable": false 00:21:57.591 } 00:21:57.591 }, 00:21:57.591 { 00:21:57.591 "method": "bdev_wait_for_examine" 00:21:57.591 } 00:21:57.591 ] 00:21:57.591 }, 00:21:57.591 { 00:21:57.591 "subsystem": "nbd", 00:21:57.591 "config": [] 00:21:57.591 } 00:21:57.591 ] 00:21:57.591 }' 00:21:57.591 03:24:40 keyring_file -- keyring/file.sh@115 -- # killprocess 85411 00:21:57.591 03:24:40 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 85411 ']' 00:21:57.591 03:24:40 keyring_file -- common/autotest_common.sh@954 -- # kill -0 85411 00:21:57.591 03:24:40 keyring_file -- common/autotest_common.sh@955 -- # uname 00:21:57.591 03:24:40 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:57.591 03:24:40 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85411 00:21:57.591 killing process with pid 85411 00:21:57.591 Received shutdown signal, test time was about 1.000000 seconds 00:21:57.591 00:21:57.591 Latency(us) 00:21:57.591 [2024-10-09T03:24:40.894Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.591 [2024-10-09T03:24:40.894Z] =================================================================================================================== 00:21:57.591 [2024-10-09T03:24:40.894Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:57.591 03:24:40 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:57.591 03:24:40 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:57.591 03:24:40 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85411' 00:21:57.591 03:24:40 keyring_file -- 
common/autotest_common.sh@969 -- # kill 85411 00:21:57.591 03:24:40 keyring_file -- common/autotest_common.sh@974 -- # wait 85411 00:21:57.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:57.850 03:24:40 keyring_file -- keyring/file.sh@118 -- # bperfpid=85663 00:21:57.850 03:24:40 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:21:57.850 03:24:40 keyring_file -- keyring/file.sh@120 -- # waitforlisten 85663 /var/tmp/bperf.sock 00:21:57.850 03:24:40 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 85663 ']' 00:21:57.850 03:24:40 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:57.850 03:24:40 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:21:57.850 "subsystems": [ 00:21:57.850 { 00:21:57.850 "subsystem": "keyring", 00:21:57.850 "config": [ 00:21:57.850 { 00:21:57.850 "method": "keyring_file_add_key", 00:21:57.850 "params": { 00:21:57.850 "name": "key0", 00:21:57.850 "path": "/tmp/tmp.1bCQbybRPD" 00:21:57.850 } 00:21:57.850 }, 00:21:57.850 { 00:21:57.850 "method": "keyring_file_add_key", 00:21:57.850 "params": { 00:21:57.850 "name": "key1", 00:21:57.850 "path": "/tmp/tmp.QpiZhX1oLA" 00:21:57.850 } 00:21:57.850 } 00:21:57.850 ] 00:21:57.850 }, 00:21:57.850 { 00:21:57.850 "subsystem": "iobuf", 00:21:57.850 "config": [ 00:21:57.850 { 00:21:57.850 "method": "iobuf_set_options", 00:21:57.850 "params": { 00:21:57.850 "small_pool_count": 8192, 00:21:57.850 "large_pool_count": 1024, 00:21:57.850 "small_bufsize": 8192, 00:21:57.850 "large_bufsize": 135168 00:21:57.850 } 00:21:57.850 } 00:21:57.850 ] 00:21:57.850 }, 00:21:57.850 { 00:21:57.850 "subsystem": "sock", 00:21:57.850 "config": [ 00:21:57.850 { 00:21:57.850 "method": "sock_set_default_impl", 00:21:57.850 "params": { 00:21:57.850 "impl_name": "uring" 00:21:57.850 } 00:21:57.850 }, 00:21:57.850 { 00:21:57.850 "method": "sock_impl_set_options", 00:21:57.850 "params": { 00:21:57.850 "impl_name": "ssl", 00:21:57.850 "recv_buf_size": 4096, 00:21:57.850 "send_buf_size": 4096, 00:21:57.850 "enable_recv_pipe": true, 00:21:57.850 "enable_quickack": false, 00:21:57.850 "enable_placement_id": 0, 00:21:57.850 "enable_zerocopy_send_server": true, 00:21:57.850 "enable_zerocopy_send_client": false, 00:21:57.850 "zerocopy_threshold": 0, 00:21:57.850 "tls_version": 0, 00:21:57.850 "enable_ktls": false 00:21:57.850 } 00:21:57.850 }, 00:21:57.850 { 00:21:57.850 "method": "sock_impl_set_options", 00:21:57.850 "params": { 00:21:57.850 "impl_name": "posix", 00:21:57.850 "recv_buf_size": 2097152, 00:21:57.850 "send_buf_size": 2097152, 00:21:57.850 "enable_recv_pipe": true, 00:21:57.850 "enable_quickack": false, 00:21:57.850 "enable_placement_id": 0, 00:21:57.851 "enable_zerocopy_send_server": true, 00:21:57.851 "enable_zerocopy_send_client": false, 00:21:57.851 "zerocopy_threshold": 0, 00:21:57.851 "tls_version": 0, 00:21:57.851 "enable_ktls": false 00:21:57.851 } 00:21:57.851 }, 00:21:57.851 { 00:21:57.851 "method": "sock_impl_set_options", 00:21:57.851 "params": { 00:21:57.851 "impl_name": "uring", 00:21:57.851 "recv_buf_size": 2097152, 00:21:57.851 "send_buf_size": 2097152, 00:21:57.851 "enable_recv_pipe": true, 00:21:57.851 "enable_quickack": false, 00:21:57.851 "enable_placement_id": 0, 00:21:57.851 "enable_zerocopy_send_server": false, 00:21:57.851 "enable_zerocopy_send_client": false, 00:21:57.851 "zerocopy_threshold": 0, 
00:21:57.851 "tls_version": 0, 00:21:57.851 "enable_ktls": false 00:21:57.851 } 00:21:57.851 } 00:21:57.851 ] 00:21:57.851 }, 00:21:57.851 { 00:21:57.851 "subsystem": "vmd", 00:21:57.851 "config": [] 00:21:57.851 }, 00:21:57.851 { 00:21:57.851 "subsystem": "accel", 00:21:57.851 "config": [ 00:21:57.851 { 00:21:57.851 "method": "accel_set_options", 00:21:57.851 "params": { 00:21:57.851 "small_cache_size": 128, 00:21:57.851 "large_cache_size": 16, 00:21:57.851 "task_count": 2048, 00:21:57.851 "sequence_count": 2048, 00:21:57.851 "buf_count": 2048 00:21:57.851 } 00:21:57.851 } 00:21:57.851 ] 00:21:57.851 }, 00:21:57.851 { 00:21:57.851 "subsystem": "bdev", 00:21:57.851 "config": [ 00:21:57.851 { 00:21:57.851 "method": "bdev_set_options", 00:21:57.851 "params": { 00:21:57.851 "bdev_io_pool_size": 65535, 00:21:57.851 "bdev_io_cache_size": 256, 00:21:57.851 "bdev_auto_examine": true, 00:21:57.851 "iobuf_small_cache_size": 128, 00:21:57.851 "iobuf_large_cache_size": 16 00:21:57.851 } 00:21:57.851 }, 00:21:57.851 { 00:21:57.851 "method": "bdev_raid_set_options", 00:21:57.851 "params": { 00:21:57.851 "process_window_size_kb": 1024, 00:21:57.851 "process_max_bandwidth_mb_sec": 0 00:21:57.851 } 00:21:57.851 }, 00:21:57.851 { 00:21:57.851 "method": "bdev_iscsi_set_options", 00:21:57.851 "params": { 00:21:57.851 "timeout_sec": 30 00:21:57.851 } 00:21:57.851 }, 00:21:57.851 { 00:21:57.851 "method": "bdev_nvme_set_options", 00:21:57.851 "params": { 00:21:57.851 "action_on_timeout": "none", 00:21:57.851 "timeout_us": 0, 00:21:57.851 "timeout_admin_us": 0, 00:21:57.851 "keep_alive_timeout_ms": 10000, 00:21:57.851 "arbitration_burst": 0, 00:21:57.851 "low_priority_weight": 0, 00:21:57.851 "medium_priority_weight": 0, 00:21:57.851 "high_priority_weight": 0, 00:21:57.851 "nvme_adminq_poll_period_us": 10000, 00:21:57.851 "nvme_ioq_poll_period_us": 0, 00:21:57.851 "io_queue_requests": 512, 00:21:57.851 "delay_cmd_submit": true, 00:21:57.851 "transport_retry_count": 4, 00:21:57.851 "bdev_retry_count": 3, 00:21:57.851 "transport_ack_timeout": 0, 00:21:57.851 "ctrlr_loss_timeout_sec": 0, 00:21:57.851 "reconnect_delay_sec": 0, 00:21:57.851 "fast_io_fail_timeout_sec": 0, 00:21:57.851 "disable_auto_failback": false, 00:21:57.851 "generate_uuids": false, 00:21:57.851 "transport_tos": 0, 00:21:57.851 "nvme_error_stat": false, 00:21:57.851 "rdma_srq_size": 0, 00:21:57.851 "io_path_stat": false, 00:21:57.851 "allow_accel_sequence": false, 00:21:57.851 "rdma_max_cq_size": 0, 00:21:57.851 "rdma_cm_event_timeout_ms": 0, 00:21:57.851 "dhchap_digests": [ 00:21:57.851 "sha256", 00:21:57.851 "sha384", 00:21:57.851 "sha512" 00:21:57.851 ], 00:21:57.851 "dhchap_dhgroups": [ 00:21:57.851 "null", 00:21:57.851 "ffdhe2048", 00:21:57.851 "ffdhe3072", 00:21:57.851 "ffdhe4096", 00:21:57.851 "ffdhe6144", 00:21:57.851 "ffdhe8192" 00:21:57.851 ] 00:21:57.851 } 00:21:57.851 }, 00:21:57.851 { 00:21:57.851 "method": "bdev_nvme_attach_controller", 00:21:57.851 "params": { 00:21:57.851 "name": "nvme0", 00:21:57.851 "trtype": "TCP", 00:21:57.851 "adrfam": "IPv4", 00:21:57.851 "traddr": "127.0.0.1", 00:21:57.851 "trsvcid": "4420", 00:21:57.851 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:57.851 "prchk_reftag": false, 00:21:57.851 "prchk_guard": false, 00:21:57.851 "ctrlr_loss_timeout_sec": 0, 00:21:57.851 "reconnect_delay_sec": 0, 00:21:57.851 "fast_io_fail_timeout_sec": 0, 00:21:57.851 "psk": "key0", 00:21:57.851 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:57.851 "hdgst": false, 00:21:57.851 "ddgst": false, 00:21:57.851 "multipath": 
"multipath" 00:21:57.851 } 00:21:57.851 }, 00:21:57.851 { 00:21:57.851 "method": "bdev_nvme_set_hotplug", 00:21:57.851 "params": { 00:21:57.851 "period_us": 100000, 00:21:57.851 "enable": false 00:21:57.851 } 00:21:57.851 }, 00:21:57.851 { 00:21:57.851 "method": "bdev_wait_for_examine" 00:21:57.851 } 00:21:57.851 ] 00:21:57.851 }, 00:21:57.851 { 00:21:57.851 "subsystem": "nbd", 00:21:57.851 "config": [] 00:21:57.851 } 00:21:57.851 ] 00:21:57.851 }' 00:21:57.851 03:24:40 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:57.851 03:24:40 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:57.851 03:24:40 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:57.851 03:24:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:57.851 [2024-10-09 03:24:41.041913] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:21:57.851 [2024-10-09 03:24:41.042022] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85663 ] 00:21:58.110 [2024-10-09 03:24:41.177504] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.110 [2024-10-09 03:24:41.279096] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.369 [2024-10-09 03:24:41.432433] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:58.369 [2024-10-09 03:24:41.497437] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:58.936 03:24:41 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:58.936 03:24:41 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:21:58.936 03:24:41 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:21:58.936 03:24:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:58.936 03:24:41 keyring_file -- keyring/file.sh@121 -- # jq length 00:21:58.936 03:24:42 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:21:59.195 03:24:42 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:21:59.195 03:24:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:59.195 03:24:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:59.195 03:24:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:59.195 03:24:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:59.195 03:24:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:59.454 03:24:42 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:21:59.454 03:24:42 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:21:59.454 03:24:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:59.454 03:24:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:59.454 03:24:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:59.454 03:24:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:59.454 03:24:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:21:59.713 03:24:42 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:21:59.713 03:24:42 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:21:59.713 03:24:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:21:59.713 03:24:42 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:21:59.972 03:24:43 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:21:59.972 03:24:43 keyring_file -- keyring/file.sh@1 -- # cleanup 00:21:59.972 03:24:43 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.1bCQbybRPD /tmp/tmp.QpiZhX1oLA 00:21:59.972 03:24:43 keyring_file -- keyring/file.sh@20 -- # killprocess 85663 00:21:59.972 03:24:43 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 85663 ']' 00:21:59.972 03:24:43 keyring_file -- common/autotest_common.sh@954 -- # kill -0 85663 00:21:59.972 03:24:43 keyring_file -- common/autotest_common.sh@955 -- # uname 00:21:59.972 03:24:43 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:59.972 03:24:43 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85663 00:21:59.972 killing process with pid 85663 00:21:59.972 Received shutdown signal, test time was about 1.000000 seconds 00:21:59.972 00:21:59.972 Latency(us) 00:21:59.972 [2024-10-09T03:24:43.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.972 [2024-10-09T03:24:43.275Z] =================================================================================================================== 00:21:59.972 [2024-10-09T03:24:43.275Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:59.972 03:24:43 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:59.972 03:24:43 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:59.972 03:24:43 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85663' 00:21:59.972 03:24:43 keyring_file -- common/autotest_common.sh@969 -- # kill 85663 00:21:59.972 03:24:43 keyring_file -- common/autotest_common.sh@974 -- # wait 85663 00:22:00.230 03:24:43 keyring_file -- keyring/file.sh@21 -- # killprocess 85394 00:22:00.230 03:24:43 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 85394 ']' 00:22:00.230 03:24:43 keyring_file -- common/autotest_common.sh@954 -- # kill -0 85394 00:22:00.230 03:24:43 keyring_file -- common/autotest_common.sh@955 -- # uname 00:22:00.230 03:24:43 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:00.230 03:24:43 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85394 00:22:00.230 killing process with pid 85394 00:22:00.230 03:24:43 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:00.230 03:24:43 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:00.230 03:24:43 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85394' 00:22:00.230 03:24:43 keyring_file -- common/autotest_common.sh@969 -- # kill 85394 00:22:00.230 03:24:43 keyring_file -- common/autotest_common.sh@974 -- # wait 85394 00:22:00.797 00:22:00.797 real 0m16.498s 00:22:00.797 user 0m40.669s 00:22:00.797 sys 0m3.081s 00:22:00.797 03:24:43 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:00.797 ************************************ 00:22:00.797 END TEST keyring_file 
00:22:00.797 ************************************ 00:22:00.797 03:24:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:00.797 03:24:43 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:22:00.797 03:24:43 -- spdk/autotest.sh@290 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:22:00.797 03:24:43 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:00.797 03:24:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:00.797 03:24:43 -- common/autotest_common.sh@10 -- # set +x 00:22:00.797 ************************************ 00:22:00.797 START TEST keyring_linux 00:22:00.797 ************************************ 00:22:00.797 03:24:43 keyring_linux -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:22:00.797 Joined session keyring: 883320981 00:22:00.797 * Looking for test storage... 00:22:00.797 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:22:00.797 03:24:44 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:00.797 03:24:44 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:22:00.797 03:24:44 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:01.057 03:24:44 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:01.057 03:24:44 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:01.057 03:24:44 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:01.057 03:24:44 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:01.057 03:24:44 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:22:01.057 03:24:44 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:22:01.057 03:24:44 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:22:01.057 03:24:44 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:22:01.057 03:24:44 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:22:01.057 03:24:44 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:22:01.057 03:24:44 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:22:01.057 03:24:44 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:01.057 03:24:44 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:22:01.057 03:24:44 keyring_linux -- scripts/common.sh@345 -- # : 1 00:22:01.057 03:24:44 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:01.057 03:24:44 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:01.057 03:24:44 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:22:01.057 03:24:44 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:22:01.057 03:24:44 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:01.057 03:24:44 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:22:01.057 03:24:44 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:22:01.057 03:24:44 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:22:01.057 03:24:44 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:22:01.057 03:24:44 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:01.057 03:24:44 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:22:01.057 03:24:44 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:22:01.057 03:24:44 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:01.057 03:24:44 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:01.057 03:24:44 keyring_linux -- scripts/common.sh@368 -- # return 0 00:22:01.057 03:24:44 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:01.057 03:24:44 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:01.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.057 --rc genhtml_branch_coverage=1 00:22:01.057 --rc genhtml_function_coverage=1 00:22:01.057 --rc genhtml_legend=1 00:22:01.057 --rc geninfo_all_blocks=1 00:22:01.057 --rc geninfo_unexecuted_blocks=1 00:22:01.057 00:22:01.057 ' 00:22:01.057 03:24:44 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:01.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.057 --rc genhtml_branch_coverage=1 00:22:01.057 --rc genhtml_function_coverage=1 00:22:01.057 --rc genhtml_legend=1 00:22:01.057 --rc geninfo_all_blocks=1 00:22:01.057 --rc geninfo_unexecuted_blocks=1 00:22:01.057 00:22:01.057 ' 00:22:01.057 03:24:44 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:01.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.057 --rc genhtml_branch_coverage=1 00:22:01.057 --rc genhtml_function_coverage=1 00:22:01.057 --rc genhtml_legend=1 00:22:01.057 --rc geninfo_all_blocks=1 00:22:01.057 --rc geninfo_unexecuted_blocks=1 00:22:01.057 00:22:01.057 ' 00:22:01.057 03:24:44 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:01.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.057 --rc genhtml_branch_coverage=1 00:22:01.057 --rc genhtml_function_coverage=1 00:22:01.057 --rc genhtml_legend=1 00:22:01.057 --rc geninfo_all_blocks=1 00:22:01.057 --rc geninfo_unexecuted_blocks=1 00:22:01.057 00:22:01.057 ' 00:22:01.057 03:24:44 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:22:01.057 03:24:44 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:01.057 03:24:44 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:22:01.057 03:24:44 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:01.057 03:24:44 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:01.057 03:24:44 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:01.057 03:24:44 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:01.057 03:24:44 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:01.057 03:24:44 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:01.057 03:24:44 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:01.057 03:24:44 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:01.057 03:24:44 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:01.057 03:24:44 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:01.057 03:24:44 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb2c30f2-294c-46db-807f-ce0b3b357918 00:22:01.057 03:24:44 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=cb2c30f2-294c-46db-807f-ce0b3b357918 00:22:01.057 03:24:44 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:01.057 03:24:44 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:01.057 03:24:44 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:01.057 03:24:44 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:01.057 03:24:44 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:01.057 03:24:44 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:22:01.057 03:24:44 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:01.057 03:24:44 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:01.057 03:24:44 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:01.057 03:24:44 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.057 03:24:44 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.057 03:24:44 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.057 03:24:44 keyring_linux -- paths/export.sh@5 -- # export PATH 00:22:01.057 03:24:44 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.057 03:24:44 keyring_linux -- nvmf/common.sh@51 -- # : 0 
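The lt 1.15 2 / cmp_versions trace at the top of this test (scripts/common.sh) is deciding which spelling of the lcov coverage flags to use: both version strings are split on '.', '-' and ':' and the numeric fields are compared left to right. A minimal bash sketch of that comparison, assuming purely numeric fields (the full helper also implements the >, = and related cases):

version_lt() {   # succeed when $1 sorts strictly before $2
    local -a ver1 ver2
    local i n
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0   # first smaller field decides
        (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov older than 2, keep the --rc lcov_* option names"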
00:22:01.057 03:24:44 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:01.057 03:24:44 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:01.057 03:24:44 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:01.057 03:24:44 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:01.057 03:24:44 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:01.057 03:24:44 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:01.057 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:01.057 03:24:44 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:01.057 03:24:44 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:01.057 03:24:44 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:01.057 03:24:44 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:22:01.057 03:24:44 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:22:01.057 03:24:44 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:22:01.057 03:24:44 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:22:01.057 03:24:44 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:22:01.057 03:24:44 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:22:01.057 03:24:44 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:22:01.057 03:24:44 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:22:01.058 03:24:44 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:22:01.058 03:24:44 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:01.058 03:24:44 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:22:01.058 03:24:44 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:22:01.058 03:24:44 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:01.058 03:24:44 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:01.058 03:24:44 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:22:01.058 03:24:44 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:22:01.058 03:24:44 keyring_linux -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:22:01.058 03:24:44 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:22:01.058 03:24:44 keyring_linux -- nvmf/common.sh@731 -- # python - 00:22:01.058 03:24:44 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:22:01.058 /tmp/:spdk-test:key0 00:22:01.058 03:24:44 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:22:01.058 03:24:44 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:22:01.058 03:24:44 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:22:01.058 03:24:44 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:22:01.058 03:24:44 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:22:01.058 03:24:44 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:22:01.058 03:24:44 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:22:01.058 03:24:44 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:22:01.058 03:24:44 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:22:01.058 03:24:44 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:22:01.058 03:24:44 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:22:01.058 03:24:44 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:22:01.058 03:24:44 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:22:01.058 03:24:44 keyring_linux -- nvmf/common.sh@731 -- # python - 00:22:01.058 03:24:44 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:22:01.058 /tmp/:spdk-test:key1 00:22:01.058 03:24:44 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:22:01.058 03:24:44 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85790 00:22:01.058 03:24:44 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:01.058 03:24:44 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85790 00:22:01.058 03:24:44 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 85790 ']' 00:22:01.058 03:24:44 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.058 03:24:44 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:01.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.058 03:24:44 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.058 03:24:44 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:01.058 03:24:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:01.058 [2024-10-09 03:24:44.338008] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
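prep_key above wraps the raw hex string in the NVMe/TCP TLS PSK interchange form (NVMeTLSkey-1:00:<base64>:) and parks it in /tmp with mode 0600; the encoding itself is done by the inline python at nvmf/common.sh@731. A rough sketch of that step, assuming the usual interchange layout of base64(key bytes plus a little-endian CRC32) with '00' meaning no hash; treat it as an illustration, not the canonical script:

key=00112233445566778899aabbccddeeff   # the 32 hex characters are used verbatim as ASCII key material
path=/tmp/:spdk-test:key0

psk=$(python3 - "$key" <<'PY'
import sys, base64, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed 4-byte little-endian integrity tail
print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")
PY
)

printf '%s' "$psk" > "$path"
chmod 0600 "$path"
echo "$path"   # prep_key's result is the path, matching the /tmp/:spdk-test:key0 echoes above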
00:22:01.058 [2024-10-09 03:24:44.338132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85790 ] 00:22:01.316 [2024-10-09 03:24:44.477938] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.316 [2024-10-09 03:24:44.587458] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.575 [2024-10-09 03:24:44.686285] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:02.144 03:24:45 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:02.144 03:24:45 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:22:02.144 03:24:45 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:22:02.144 03:24:45 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.144 03:24:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:02.144 [2024-10-09 03:24:45.375803] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:02.144 null0 00:22:02.144 [2024-10-09 03:24:45.407774] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:02.144 [2024-10-09 03:24:45.407967] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:02.144 03:24:45 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.144 03:24:45 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:22:02.144 1002169374 00:22:02.144 03:24:45 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:22:02.144 759681895 00:22:02.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:02.144 03:24:45 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85808 00:22:02.144 03:24:45 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:22:02.144 03:24:45 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85808 /var/tmp/bperf.sock 00:22:02.144 03:24:45 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 85808 ']' 00:22:02.144 03:24:45 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:02.144 03:24:45 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:02.144 03:24:45 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:02.144 03:24:45 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:02.144 03:24:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:02.403 [2024-10-09 03:24:45.494334] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
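keyctl-session-wrapper ran this whole test inside a fresh session keyring ('Joined session keyring: 883320981' near the top), so the key handling traced above and below is plain keyutils against @s. The lifecycle the test exercises, in sketch form (serials such as 1002169374 are simply whatever the kernel hands out):

# Register the interchange-format PSK as a user key on the session keyring;
# keyctl prints the new key's serial number.
sn=$(keyctl add user ":spdk-test:key0" "$(cat /tmp/:spdk-test:key0)" @s)

keyctl search @s user ":spdk-test:key0"   # resolve the description back to the serial
keyctl print "$sn"                        # dump the payload (the NVMeTLSkey-1:... string)
keyctl unlink "$sn"                       # cleanup later detaches it again ("1 links removed")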
00:22:02.403 [2024-10-09 03:24:45.494633] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85808 ] 00:22:02.403 [2024-10-09 03:24:45.634985] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.669 [2024-10-09 03:24:45.743174] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.238 03:24:46 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:03.238 03:24:46 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:22:03.238 03:24:46 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:22:03.238 03:24:46 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:22:03.496 03:24:46 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:22:03.496 03:24:46 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:03.755 [2024-10-09 03:24:47.004400] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:03.755 03:24:47 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:03.755 03:24:47 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:04.014 [2024-10-09 03:24:47.299247] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:04.273 nvme0n1 00:22:04.273 03:24:47 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:22:04.273 03:24:47 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:22:04.273 03:24:47 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:04.273 03:24:47 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:04.273 03:24:47 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:04.273 03:24:47 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:04.532 03:24:47 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:22:04.532 03:24:47 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:04.532 03:24:47 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:22:04.532 03:24:47 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:22:04.532 03:24:47 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:04.532 03:24:47 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:04.532 03:24:47 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:22:04.791 03:24:47 keyring_linux -- keyring/linux.sh@25 -- # sn=1002169374 00:22:04.791 03:24:47 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:22:04.791 03:24:47 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
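With the target listening on 127.0.0.1:4420 and bdevperf parked on /var/tmp/bperf.sock, the happy-path half of the test condenses to a short RPC sequence (the same calls traced above):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

$RPC keyring_linux_set_options --enable   # let SPDK resolve keys out of the kernel keyring
$RPC framework_start_init                 # finish init; bdevperf was launched with --wait-for-rpc
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

Note that --psk here names a key description in the session keyring rather than a file path; resolving such names is what keyring_linux_set_options --enable switches on.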
00:22:04.791 03:24:47 keyring_linux -- keyring/linux.sh@26 -- # [[ 1002169374 == \1\0\0\2\1\6\9\3\7\4 ]] 00:22:04.791 03:24:47 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 1002169374 00:22:04.791 03:24:47 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:22:04.791 03:24:47 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:04.791 Running I/O for 1 seconds... 00:22:05.727 13717.00 IOPS, 53.58 MiB/s 00:22:05.727 Latency(us) 00:22:05.727 [2024-10-09T03:24:49.030Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.727 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:05.727 nvme0n1 : 1.01 13722.20 53.60 0.00 0.00 9281.72 2561.86 10902.81 00:22:05.727 [2024-10-09T03:24:49.030Z] =================================================================================================================== 00:22:05.727 [2024-10-09T03:24:49.030Z] Total : 13722.20 53.60 0.00 0.00 9281.72 2561.86 10902.81 00:22:05.727 { 00:22:05.727 "results": [ 00:22:05.727 { 00:22:05.727 "job": "nvme0n1", 00:22:05.727 "core_mask": "0x2", 00:22:05.727 "workload": "randread", 00:22:05.727 "status": "finished", 00:22:05.727 "queue_depth": 128, 00:22:05.727 "io_size": 4096, 00:22:05.727 "runtime": 1.008949, 00:22:05.727 "iops": 13722.200031914397, 00:22:05.727 "mibps": 53.602343874665614, 00:22:05.727 "io_failed": 0, 00:22:05.727 "io_timeout": 0, 00:22:05.727 "avg_latency_us": 9281.72184562855, 00:22:05.727 "min_latency_us": 2561.8618181818183, 00:22:05.727 "max_latency_us": 10902.807272727272 00:22:05.727 } 00:22:05.727 ], 00:22:05.727 "core_count": 1 00:22:05.727 } 00:22:05.986 03:24:49 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:05.986 03:24:49 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:06.245 03:24:49 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:22:06.245 03:24:49 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:22:06.245 03:24:49 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:06.245 03:24:49 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:06.245 03:24:49 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:06.245 03:24:49 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:06.505 03:24:49 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:22:06.505 03:24:49 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:06.505 03:24:49 keyring_linux -- keyring/linux.sh@23 -- # return 00:22:06.505 03:24:49 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:06.505 03:24:49 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:22:06.505 03:24:49 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
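check_keys (linux.sh@19-27) cross-checks SPDK's view of the key against the kernel's, and after the detach the same helper simply asserts the count is back to 0. Roughly, reusing the RPC shorthand from the previous sketch:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

# Exactly one key should be registered with the app...
[[ $($RPC keyring_get_keys | jq length) -eq 1 ]]

# ...its serial must match what the kernel reports for the same description...
sn=$($RPC keyring_get_keys | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn')
[[ $sn == "$(keyctl search @s user ":spdk-test:key0")" ]]

# ...and its payload must still be the PSK that prep_key generated.
[[ $(keyctl print "$sn") == "$(cat /tmp/:spdk-test:key0)" ]]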
00:22:06.505 03:24:49 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:22:06.505 03:24:49 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:06.505 03:24:49 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:22:06.505 03:24:49 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:06.505 03:24:49 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:06.505 03:24:49 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:06.764 [2024-10-09 03:24:49.841792] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:06.764 [2024-10-09 03:24:49.842466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2269aa0 (107): Transport endpoint is not connected 00:22:06.764 [2024-10-09 03:24:49.843452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2269aa0 (9): Bad file descriptor 00:22:06.764 [2024-10-09 03:24:49.844450] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:06.764 [2024-10-09 03:24:49.844619] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:22:06.764 [2024-10-09 03:24:49.844733] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:22:06.764 [2024-10-09 03:24:49.844851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
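The NOT prefix on this second attach is the suite's expect-failure wrapper: the attach using :spdk-test:key1 is the negative case and is meant to fail, and NOT turns that failure into a pass. Stripped to its core (the real helper in common/autotest_common.sh adds the es > 128 signal handling and the optional expected-exit-status logic visible in the trace):

NOT() {
    local es=0
    "$@" || es=$?
    # Succeed only if the wrapped command failed, i.e. invert the exit status.
    (( !es == 0 ))
}

NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1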
00:22:06.764 request: 00:22:06.764 { 00:22:06.764 "name": "nvme0", 00:22:06.764 "trtype": "tcp", 00:22:06.764 "traddr": "127.0.0.1", 00:22:06.764 "adrfam": "ipv4", 00:22:06.764 "trsvcid": "4420", 00:22:06.764 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:06.764 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:06.764 "prchk_reftag": false, 00:22:06.764 "prchk_guard": false, 00:22:06.764 "hdgst": false, 00:22:06.764 "ddgst": false, 00:22:06.764 "psk": ":spdk-test:key1", 00:22:06.764 "allow_unrecognized_csi": false, 00:22:06.764 "method": "bdev_nvme_attach_controller", 00:22:06.764 "req_id": 1 00:22:06.764 } 00:22:06.764 Got JSON-RPC error response 00:22:06.764 response: 00:22:06.764 { 00:22:06.764 "code": -5, 00:22:06.764 "message": "Input/output error" 00:22:06.764 } 00:22:06.764 03:24:49 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:22:06.764 03:24:49 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:06.764 03:24:49 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:06.764 03:24:49 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:06.764 03:24:49 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:22:06.764 03:24:49 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:06.764 03:24:49 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:22:06.764 03:24:49 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:22:06.764 03:24:49 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:22:06.764 03:24:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:22:06.764 03:24:49 keyring_linux -- keyring/linux.sh@33 -- # sn=1002169374 00:22:06.764 03:24:49 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1002169374 00:22:06.764 1 links removed 00:22:06.764 03:24:49 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:06.764 03:24:49 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:22:06.764 03:24:49 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:22:06.764 03:24:49 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:22:06.764 03:24:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:22:06.764 03:24:49 keyring_linux -- keyring/linux.sh@33 -- # sn=759681895 00:22:06.764 03:24:49 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 759681895 00:22:06.764 1 links removed 00:22:06.764 03:24:49 keyring_linux -- keyring/linux.sh@41 -- # killprocess 85808 00:22:06.764 03:24:49 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 85808 ']' 00:22:06.764 03:24:49 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 85808 00:22:06.764 03:24:49 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:22:06.764 03:24:49 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:06.764 03:24:49 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85808 00:22:06.764 killing process with pid 85808 00:22:06.764 Received shutdown signal, test time was about 1.000000 seconds 00:22:06.764 00:22:06.764 Latency(us) 00:22:06.764 [2024-10-09T03:24:50.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:06.764 [2024-10-09T03:24:50.067Z] =================================================================================================================== 00:22:06.764 [2024-10-09T03:24:50.067Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:06.764 03:24:49 keyring_linux -- 
common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:06.764 03:24:49 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:06.764 03:24:49 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85808' 00:22:06.764 03:24:49 keyring_linux -- common/autotest_common.sh@969 -- # kill 85808 00:22:06.764 03:24:49 keyring_linux -- common/autotest_common.sh@974 -- # wait 85808 00:22:07.024 03:24:50 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85790 00:22:07.024 03:24:50 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 85790 ']' 00:22:07.024 03:24:50 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 85790 00:22:07.024 03:24:50 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:22:07.024 03:24:50 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:07.024 03:24:50 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85790 00:22:07.024 killing process with pid 85790 00:22:07.024 03:24:50 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:07.024 03:24:50 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:07.024 03:24:50 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85790' 00:22:07.024 03:24:50 keyring_linux -- common/autotest_common.sh@969 -- # kill 85790 00:22:07.024 03:24:50 keyring_linux -- common/autotest_common.sh@974 -- # wait 85790 00:22:07.592 ************************************ 00:22:07.592 END TEST keyring_linux 00:22:07.592 ************************************ 00:22:07.592 00:22:07.592 real 0m6.728s 00:22:07.592 user 0m12.698s 00:22:07.592 sys 0m1.755s 00:22:07.592 03:24:50 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:07.592 03:24:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:07.592 03:24:50 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:22:07.592 03:24:50 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:22:07.592 03:24:50 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:22:07.592 03:24:50 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:22:07.592 03:24:50 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:22:07.592 03:24:50 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:22:07.592 03:24:50 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:22:07.592 03:24:50 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:22:07.592 03:24:50 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:22:07.592 03:24:50 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:22:07.592 03:24:50 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:22:07.592 03:24:50 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:22:07.592 03:24:50 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:22:07.592 03:24:50 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:22:07.592 03:24:50 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:22:07.592 03:24:50 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:22:07.592 03:24:50 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:22:07.592 03:24:50 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:07.592 03:24:50 -- common/autotest_common.sh@10 -- # set +x 00:22:07.592 03:24:50 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:22:07.592 03:24:50 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:22:07.592 03:24:50 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:22:07.592 03:24:50 -- common/autotest_common.sh@10 -- # set +x 00:22:09.497 INFO: APP EXITING 00:22:09.497 INFO: killing all VMs 
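The "1 links removed" and "killing process" lines above come from the cleanup handler installed at the very start with trap cleanup EXIT (linux.sh@45). In outline; the real unlink_key/killprocess helpers do more careful waiting and error handling:

cleanup() {
    local key sn
    for key in key0 key1; do
        # Detach the key from the session keyring if it is still linked...
        sn=$(keyctl search @s user ":spdk-test:$key" 2>/dev/null) && keyctl unlink "$sn"
        # ...and drop the on-disk copy of the PSK.
        rm -f "/tmp/:spdk-test:$key"
    done
    kill "$bperfpid" "$tgtpid" 2>/dev/null || true   # bdevperf (85808) and spdk_tgt (85790) in this run
}
trap cleanup EXIT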
00:22:09.497 INFO: killing vhost app 00:22:09.497 INFO: EXIT DONE 00:22:09.756 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:10.015 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:22:10.015 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:22:10.582 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:10.582 Cleaning 00:22:10.582 Removing: /var/run/dpdk/spdk0/config 00:22:10.582 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:22:10.582 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:22:10.582 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:22:10.582 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:22:10.582 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:22:10.582 Removing: /var/run/dpdk/spdk0/hugepage_info 00:22:10.582 Removing: /var/run/dpdk/spdk1/config 00:22:10.582 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:22:10.582 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:22:10.582 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:22:10.582 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:22:10.582 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:22:10.582 Removing: /var/run/dpdk/spdk1/hugepage_info 00:22:10.582 Removing: /var/run/dpdk/spdk2/config 00:22:10.582 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:22:10.582 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:22:10.582 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:22:10.582 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:22:10.582 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:22:10.582 Removing: /var/run/dpdk/spdk2/hugepage_info 00:22:10.582 Removing: /var/run/dpdk/spdk3/config 00:22:10.582 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:22:10.582 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:22:10.582 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:22:10.583 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:22:10.583 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:22:10.583 Removing: /var/run/dpdk/spdk3/hugepage_info 00:22:10.583 Removing: /var/run/dpdk/spdk4/config 00:22:10.583 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:22:10.583 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:22:10.583 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:22:10.583 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:22:10.583 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:22:10.583 Removing: /var/run/dpdk/spdk4/hugepage_info 00:22:10.583 Removing: /dev/shm/nvmf_trace.0 00:22:10.583 Removing: /dev/shm/spdk_tgt_trace.pid56683 00:22:10.583 Removing: /var/run/dpdk/spdk0 00:22:10.583 Removing: /var/run/dpdk/spdk1 00:22:10.842 Removing: /var/run/dpdk/spdk2 00:22:10.842 Removing: /var/run/dpdk/spdk3 00:22:10.842 Removing: /var/run/dpdk/spdk4 00:22:10.842 Removing: /var/run/dpdk/spdk_pid56525 00:22:10.842 Removing: /var/run/dpdk/spdk_pid56683 00:22:10.842 Removing: /var/run/dpdk/spdk_pid56876 00:22:10.842 Removing: /var/run/dpdk/spdk_pid56963 00:22:10.842 Removing: /var/run/dpdk/spdk_pid56988 00:22:10.842 Removing: /var/run/dpdk/spdk_pid57098 00:22:10.842 Removing: /var/run/dpdk/spdk_pid57116 00:22:10.842 Removing: /var/run/dpdk/spdk_pid57255 00:22:10.842 Removing: /var/run/dpdk/spdk_pid57452 00:22:10.842 Removing: /var/run/dpdk/spdk_pid57610 00:22:10.842 Removing: /var/run/dpdk/spdk_pid57683 00:22:10.842 
Removing: /var/run/dpdk/spdk_pid57767 00:22:10.842 Removing: /var/run/dpdk/spdk_pid57866 00:22:10.842 Removing: /var/run/dpdk/spdk_pid57951 00:22:10.842 Removing: /var/run/dpdk/spdk_pid57989 00:22:10.842 Removing: /var/run/dpdk/spdk_pid58025 00:22:10.842 Removing: /var/run/dpdk/spdk_pid58093 00:22:10.842 Removing: /var/run/dpdk/spdk_pid58205 00:22:10.842 Removing: /var/run/dpdk/spdk_pid58638 00:22:10.842 Removing: /var/run/dpdk/spdk_pid58691 00:22:10.842 Removing: /var/run/dpdk/spdk_pid58741 00:22:10.842 Removing: /var/run/dpdk/spdk_pid58757 00:22:10.842 Removing: /var/run/dpdk/spdk_pid58830 00:22:10.842 Removing: /var/run/dpdk/spdk_pid58838 00:22:10.842 Removing: /var/run/dpdk/spdk_pid58905 00:22:10.842 Removing: /var/run/dpdk/spdk_pid58921 00:22:10.842 Removing: /var/run/dpdk/spdk_pid58971 00:22:10.842 Removing: /var/run/dpdk/spdk_pid58990 00:22:10.842 Removing: /var/run/dpdk/spdk_pid59030 00:22:10.842 Removing: /var/run/dpdk/spdk_pid59048 00:22:10.842 Removing: /var/run/dpdk/spdk_pid59190 00:22:10.842 Removing: /var/run/dpdk/spdk_pid59225 00:22:10.842 Removing: /var/run/dpdk/spdk_pid59308 00:22:10.842 Removing: /var/run/dpdk/spdk_pid59647 00:22:10.842 Removing: /var/run/dpdk/spdk_pid59659 00:22:10.842 Removing: /var/run/dpdk/spdk_pid59696 00:22:10.842 Removing: /var/run/dpdk/spdk_pid59715 00:22:10.842 Removing: /var/run/dpdk/spdk_pid59736 00:22:10.842 Removing: /var/run/dpdk/spdk_pid59755 00:22:10.842 Removing: /var/run/dpdk/spdk_pid59774 00:22:10.842 Removing: /var/run/dpdk/spdk_pid59788 00:22:10.842 Removing: /var/run/dpdk/spdk_pid59814 00:22:10.842 Removing: /var/run/dpdk/spdk_pid59822 00:22:10.842 Removing: /var/run/dpdk/spdk_pid59843 00:22:10.842 Removing: /var/run/dpdk/spdk_pid59862 00:22:10.842 Removing: /var/run/dpdk/spdk_pid59881 00:22:10.842 Removing: /var/run/dpdk/spdk_pid59891 00:22:10.842 Removing: /var/run/dpdk/spdk_pid59922 00:22:10.842 Removing: /var/run/dpdk/spdk_pid59931 00:22:10.842 Removing: /var/run/dpdk/spdk_pid59952 00:22:10.842 Removing: /var/run/dpdk/spdk_pid59971 00:22:10.842 Removing: /var/run/dpdk/spdk_pid59991 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60001 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60037 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60056 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60090 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60162 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60186 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60201 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60235 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60241 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60254 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60302 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60316 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60344 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60359 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60369 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60378 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60393 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60402 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60412 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60422 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60456 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60482 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60497 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60526 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60535 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60543 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60589 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60600 00:22:10.842 Removing: 
/var/run/dpdk/spdk_pid60627 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60640 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60646 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60655 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60662 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60670 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60677 00:22:10.842 Removing: /var/run/dpdk/spdk_pid60685 00:22:11.101 Removing: /var/run/dpdk/spdk_pid60767 00:22:11.101 Removing: /var/run/dpdk/spdk_pid60814 00:22:11.101 Removing: /var/run/dpdk/spdk_pid60927 00:22:11.101 Removing: /var/run/dpdk/spdk_pid60966 00:22:11.101 Removing: /var/run/dpdk/spdk_pid61011 00:22:11.101 Removing: /var/run/dpdk/spdk_pid61031 00:22:11.101 Removing: /var/run/dpdk/spdk_pid61044 00:22:11.101 Removing: /var/run/dpdk/spdk_pid61064 00:22:11.101 Removing: /var/run/dpdk/spdk_pid61101 00:22:11.101 Removing: /var/run/dpdk/spdk_pid61122 00:22:11.101 Removing: /var/run/dpdk/spdk_pid61200 00:22:11.102 Removing: /var/run/dpdk/spdk_pid61216 00:22:11.102 Removing: /var/run/dpdk/spdk_pid61260 00:22:11.102 Removing: /var/run/dpdk/spdk_pid61325 00:22:11.102 Removing: /var/run/dpdk/spdk_pid61387 00:22:11.102 Removing: /var/run/dpdk/spdk_pid61418 00:22:11.102 Removing: /var/run/dpdk/spdk_pid61517 00:22:11.102 Removing: /var/run/dpdk/spdk_pid61560 00:22:11.102 Removing: /var/run/dpdk/spdk_pid61598 00:22:11.102 Removing: /var/run/dpdk/spdk_pid61824 00:22:11.102 Removing: /var/run/dpdk/spdk_pid61922 00:22:11.102 Removing: /var/run/dpdk/spdk_pid61956 00:22:11.102 Removing: /var/run/dpdk/spdk_pid61980 00:22:11.102 Removing: /var/run/dpdk/spdk_pid62019 00:22:11.102 Removing: /var/run/dpdk/spdk_pid62058 00:22:11.102 Removing: /var/run/dpdk/spdk_pid62091 00:22:11.102 Removing: /var/run/dpdk/spdk_pid62123 00:22:11.102 Removing: /var/run/dpdk/spdk_pid62522 00:22:11.102 Removing: /var/run/dpdk/spdk_pid62566 00:22:11.102 Removing: /var/run/dpdk/spdk_pid62918 00:22:11.102 Removing: /var/run/dpdk/spdk_pid63386 00:22:11.102 Removing: /var/run/dpdk/spdk_pid63664 00:22:11.102 Removing: /var/run/dpdk/spdk_pid64563 00:22:11.102 Removing: /var/run/dpdk/spdk_pid65496 00:22:11.102 Removing: /var/run/dpdk/spdk_pid65618 00:22:11.102 Removing: /var/run/dpdk/spdk_pid65686 00:22:11.102 Removing: /var/run/dpdk/spdk_pid67113 00:22:11.102 Removing: /var/run/dpdk/spdk_pid67432 00:22:11.102 Removing: /var/run/dpdk/spdk_pid71150 00:22:11.102 Removing: /var/run/dpdk/spdk_pid71517 00:22:11.102 Removing: /var/run/dpdk/spdk_pid71626 00:22:11.102 Removing: /var/run/dpdk/spdk_pid71753 00:22:11.102 Removing: /var/run/dpdk/spdk_pid71787 00:22:11.102 Removing: /var/run/dpdk/spdk_pid71820 00:22:11.102 Removing: /var/run/dpdk/spdk_pid71850 00:22:11.102 Removing: /var/run/dpdk/spdk_pid71955 00:22:11.102 Removing: /var/run/dpdk/spdk_pid72091 00:22:11.102 Removing: /var/run/dpdk/spdk_pid72260 00:22:11.102 Removing: /var/run/dpdk/spdk_pid72353 00:22:11.102 Removing: /var/run/dpdk/spdk_pid72540 00:22:11.102 Removing: /var/run/dpdk/spdk_pid72615 00:22:11.102 Removing: /var/run/dpdk/spdk_pid72709 00:22:11.102 Removing: /var/run/dpdk/spdk_pid73071 00:22:11.102 Removing: /var/run/dpdk/spdk_pid73494 00:22:11.102 Removing: /var/run/dpdk/spdk_pid73495 00:22:11.102 Removing: /var/run/dpdk/spdk_pid73496 00:22:11.102 Removing: /var/run/dpdk/spdk_pid73769 00:22:11.102 Removing: /var/run/dpdk/spdk_pid74096 00:22:11.102 Removing: /var/run/dpdk/spdk_pid74102 00:22:11.102 Removing: /var/run/dpdk/spdk_pid74430 00:22:11.102 Removing: /var/run/dpdk/spdk_pid74450 00:22:11.102 Removing: /var/run/dpdk/spdk_pid74464 
00:22:11.102 Removing: /var/run/dpdk/spdk_pid74495 00:22:11.102 Removing: /var/run/dpdk/spdk_pid74507 00:22:11.102 Removing: /var/run/dpdk/spdk_pid74869 00:22:11.102 Removing: /var/run/dpdk/spdk_pid74912 00:22:11.102 Removing: /var/run/dpdk/spdk_pid75244 00:22:11.102 Removing: /var/run/dpdk/spdk_pid75447 00:22:11.102 Removing: /var/run/dpdk/spdk_pid75901 00:22:11.102 Removing: /var/run/dpdk/spdk_pid76470 00:22:11.102 Removing: /var/run/dpdk/spdk_pid77360 00:22:11.102 Removing: /var/run/dpdk/spdk_pid78003 00:22:11.102 Removing: /var/run/dpdk/spdk_pid78005 00:22:11.102 Removing: /var/run/dpdk/spdk_pid80060 00:22:11.102 Removing: /var/run/dpdk/spdk_pid80122 00:22:11.102 Removing: /var/run/dpdk/spdk_pid80183 00:22:11.102 Removing: /var/run/dpdk/spdk_pid80236 00:22:11.102 Removing: /var/run/dpdk/spdk_pid80357 00:22:11.102 Removing: /var/run/dpdk/spdk_pid80417 00:22:11.102 Removing: /var/run/dpdk/spdk_pid80476 00:22:11.102 Removing: /var/run/dpdk/spdk_pid80532 00:22:11.102 Removing: /var/run/dpdk/spdk_pid80916 00:22:11.102 Removing: /var/run/dpdk/spdk_pid82127 00:22:11.102 Removing: /var/run/dpdk/spdk_pid82275 00:22:11.102 Removing: /var/run/dpdk/spdk_pid82522 00:22:11.102 Removing: /var/run/dpdk/spdk_pid83131 00:22:11.102 Removing: /var/run/dpdk/spdk_pid83292 00:22:11.102 Removing: /var/run/dpdk/spdk_pid83449 00:22:11.102 Removing: /var/run/dpdk/spdk_pid83545 00:22:11.102 Removing: /var/run/dpdk/spdk_pid83700 00:22:11.102 Removing: /var/run/dpdk/spdk_pid83810 00:22:11.361 Removing: /var/run/dpdk/spdk_pid84529 00:22:11.361 Removing: /var/run/dpdk/spdk_pid84564 00:22:11.361 Removing: /var/run/dpdk/spdk_pid84604 00:22:11.361 Removing: /var/run/dpdk/spdk_pid84859 00:22:11.361 Removing: /var/run/dpdk/spdk_pid84890 00:22:11.361 Removing: /var/run/dpdk/spdk_pid84924 00:22:11.361 Removing: /var/run/dpdk/spdk_pid85394 00:22:11.361 Removing: /var/run/dpdk/spdk_pid85411 00:22:11.361 Removing: /var/run/dpdk/spdk_pid85663 00:22:11.361 Removing: /var/run/dpdk/spdk_pid85790 00:22:11.361 Removing: /var/run/dpdk/spdk_pid85808 00:22:11.361 Clean 00:22:11.361 03:24:54 -- common/autotest_common.sh@1451 -- # return 0 00:22:11.361 03:24:54 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:22:11.361 03:24:54 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:11.361 03:24:54 -- common/autotest_common.sh@10 -- # set +x 00:22:11.361 03:24:54 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:22:11.361 03:24:54 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:11.361 03:24:54 -- common/autotest_common.sh@10 -- # set +x 00:22:11.361 03:24:54 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:11.361 03:24:54 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:22:11.361 03:24:54 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:22:11.361 03:24:54 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:22:11.361 03:24:54 -- spdk/autotest.sh@394 -- # hostname 00:22:11.361 03:24:54 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:22:11.620 geninfo: WARNING: invalid characters removed from testname! 
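The coverage capture just above and the merge/filter steps that follow (autotest.sh@395-404) boil down to the sequence below, with the long --rc branch/function-coverage flags elided for brevity:

out=/home/vagrant/spdk_repo/spdk/../output

# Merge the pre-run baseline with the capture taken after the tests...
lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

# ...then strip everything that is not SPDK's own code out of the report.
lcov -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
lcov -q -r "$out/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$out/cov_total.info"
lcov -q -r "$out/cov_total.info" '*/examples/vmd/*' -o "$out/cov_total.info"
lcov -q -r "$out/cov_total.info" '*/app/spdk_lspci/*' -o "$out/cov_total.info"
lcov -q -r "$out/cov_total.info" '*/app/spdk_top/*' -o "$out/cov_total.info"
rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR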
00:22:33.551 03:25:15 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:36.084 03:25:19 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:37.987 03:25:21 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:40.519 03:25:23 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:42.421 03:25:25 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:44.951 03:25:27 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:47.484 03:25:30 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:47.484 03:25:30 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:22:47.484 03:25:30 -- common/autotest_common.sh@1681 -- $ lcov --version 00:22:47.484 03:25:30 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:22:47.484 03:25:30 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:22:47.484 03:25:30 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:22:47.484 03:25:30 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:22:47.484 03:25:30 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:22:47.484 03:25:30 -- scripts/common.sh@336 -- $ IFS=.-: 00:22:47.484 03:25:30 -- scripts/common.sh@336 -- $ read -ra ver1 00:22:47.484 03:25:30 -- scripts/common.sh@337 -- $ IFS=.-: 00:22:47.484 03:25:30 -- scripts/common.sh@337 -- $ read -ra ver2 00:22:47.484 03:25:30 -- scripts/common.sh@338 -- $ local 'op=<' 00:22:47.484 03:25:30 -- scripts/common.sh@340 -- $ ver1_l=2 00:22:47.484 03:25:30 -- scripts/common.sh@341 -- $ ver2_l=1 00:22:47.484 03:25:30 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 
v 00:22:47.484 03:25:30 -- scripts/common.sh@344 -- $ case "$op" in 00:22:47.484 03:25:30 -- scripts/common.sh@345 -- $ : 1 00:22:47.484 03:25:30 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:22:47.484 03:25:30 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:47.484 03:25:30 -- scripts/common.sh@365 -- $ decimal 1 00:22:47.484 03:25:30 -- scripts/common.sh@353 -- $ local d=1 00:22:47.484 03:25:30 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:22:47.484 03:25:30 -- scripts/common.sh@355 -- $ echo 1 00:22:47.484 03:25:30 -- scripts/common.sh@365 -- $ ver1[v]=1 00:22:47.484 03:25:30 -- scripts/common.sh@366 -- $ decimal 2 00:22:47.484 03:25:30 -- scripts/common.sh@353 -- $ local d=2 00:22:47.484 03:25:30 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:22:47.484 03:25:30 -- scripts/common.sh@355 -- $ echo 2 00:22:47.484 03:25:30 -- scripts/common.sh@366 -- $ ver2[v]=2 00:22:47.484 03:25:30 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:22:47.484 03:25:30 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:22:47.484 03:25:30 -- scripts/common.sh@368 -- $ return 0 00:22:47.484 03:25:30 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:47.484 03:25:30 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:22:47.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.484 --rc genhtml_branch_coverage=1 00:22:47.484 --rc genhtml_function_coverage=1 00:22:47.484 --rc genhtml_legend=1 00:22:47.484 --rc geninfo_all_blocks=1 00:22:47.484 --rc geninfo_unexecuted_blocks=1 00:22:47.484 00:22:47.484 ' 00:22:47.484 03:25:30 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:22:47.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.484 --rc genhtml_branch_coverage=1 00:22:47.484 --rc genhtml_function_coverage=1 00:22:47.484 --rc genhtml_legend=1 00:22:47.484 --rc geninfo_all_blocks=1 00:22:47.484 --rc geninfo_unexecuted_blocks=1 00:22:47.484 00:22:47.484 ' 00:22:47.484 03:25:30 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:22:47.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.484 --rc genhtml_branch_coverage=1 00:22:47.484 --rc genhtml_function_coverage=1 00:22:47.484 --rc genhtml_legend=1 00:22:47.484 --rc geninfo_all_blocks=1 00:22:47.484 --rc geninfo_unexecuted_blocks=1 00:22:47.484 00:22:47.484 ' 00:22:47.484 03:25:30 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:22:47.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.484 --rc genhtml_branch_coverage=1 00:22:47.484 --rc genhtml_function_coverage=1 00:22:47.484 --rc genhtml_legend=1 00:22:47.484 --rc geninfo_all_blocks=1 00:22:47.484 --rc geninfo_unexecuted_blocks=1 00:22:47.484 00:22:47.484 ' 00:22:47.484 03:25:30 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:47.484 03:25:30 -- scripts/common.sh@15 -- $ shopt -s extglob 00:22:47.484 03:25:30 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:22:47.484 03:25:30 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:47.484 03:25:30 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:47.484 03:25:30 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.484 03:25:30 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.484 03:25:30 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.484 03:25:30 -- paths/export.sh@5 -- $ export PATH 00:22:47.484 03:25:30 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.484 03:25:30 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:22:47.484 03:25:30 -- common/autobuild_common.sh@486 -- $ date +%s 00:22:47.484 03:25:30 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728444330.XXXXXX 00:22:47.484 03:25:30 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728444330.OWiKbB 00:22:47.484 03:25:30 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:22:47.484 03:25:30 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:22:47.484 03:25:30 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:22:47.484 03:25:30 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:22:47.484 03:25:30 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:22:47.484 03:25:30 -- common/autobuild_common.sh@502 -- $ get_config_params 00:22:47.484 03:25:30 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:22:47.484 03:25:30 -- common/autotest_common.sh@10 -- $ set +x 00:22:47.484 03:25:30 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:22:47.484 03:25:30 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:22:47.484 03:25:30 -- pm/common@17 -- $ local monitor 00:22:47.484 03:25:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:47.484 03:25:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:47.484 
03:25:30 -- pm/common@25 -- $ sleep 1 00:22:47.484 03:25:30 -- pm/common@21 -- $ date +%s 00:22:47.484 03:25:30 -- pm/common@21 -- $ date +%s 00:22:47.484 03:25:30 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728444330 00:22:47.484 03:25:30 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728444330 00:22:47.484 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728444330_collect-cpu-load.pm.log 00:22:47.484 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728444330_collect-vmstat.pm.log 00:22:48.421 03:25:31 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:22:48.421 03:25:31 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:22:48.421 03:25:31 -- spdk/autopackage.sh@14 -- $ timing_finish 00:22:48.421 03:25:31 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:48.421 03:25:31 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:22:48.421 03:25:31 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:48.421 03:25:31 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:22:48.421 03:25:31 -- pm/common@29 -- $ signal_monitor_resources TERM 00:22:48.421 03:25:31 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:22:48.421 03:25:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:48.421 03:25:31 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:22:48.421 03:25:31 -- pm/common@44 -- $ pid=87557 00:22:48.421 03:25:31 -- pm/common@50 -- $ kill -TERM 87557 00:22:48.421 03:25:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:48.421 03:25:31 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:22:48.421 03:25:31 -- pm/common@44 -- $ pid=87559 00:22:48.421 03:25:31 -- pm/common@50 -- $ kill -TERM 87559 00:22:48.421 + [[ -n 5202 ]] 00:22:48.421 + sudo kill 5202 00:22:48.430 [Pipeline] } 00:22:48.445 [Pipeline] // timeout 00:22:48.450 [Pipeline] } 00:22:48.464 [Pipeline] // stage 00:22:48.469 [Pipeline] } 00:22:48.483 [Pipeline] // catchError 00:22:48.492 [Pipeline] stage 00:22:48.494 [Pipeline] { (Stop VM) 00:22:48.506 [Pipeline] sh 00:22:48.786 + vagrant halt 00:22:52.071 ==> default: Halting domain... 00:22:57.352 [Pipeline] sh 00:22:57.631 + vagrant destroy -f 00:23:00.164 ==> default: Removing domain... 
00:23:00.434 [Pipeline] sh 00:23:00.714 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/output 00:23:00.723 [Pipeline] } 00:23:00.737 [Pipeline] // stage 00:23:00.742 [Pipeline] } 00:23:00.756 [Pipeline] // dir 00:23:00.762 [Pipeline] } 00:23:00.776 [Pipeline] // wrap 00:23:00.783 [Pipeline] } 00:23:00.796 [Pipeline] // catchError 00:23:00.805 [Pipeline] stage 00:23:00.807 [Pipeline] { (Epilogue) 00:23:00.821 [Pipeline] sh 00:23:01.103 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:23:06.383 [Pipeline] catchError 00:23:06.385 [Pipeline] { 00:23:06.397 [Pipeline] sh 00:23:06.725 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:23:06.725 Artifacts sizes are good 00:23:06.769 [Pipeline] } 00:23:06.783 [Pipeline] // catchError 00:23:06.793 [Pipeline] archiveArtifacts 00:23:06.800 Archiving artifacts 00:23:06.923 [Pipeline] cleanWs 00:23:06.934 [WS-CLEANUP] Deleting project workspace... 00:23:06.934 [WS-CLEANUP] Deferred wipeout is used... 00:23:06.940 [WS-CLEANUP] done 00:23:06.942 [Pipeline] } 00:23:06.957 [Pipeline] // stage 00:23:06.962 [Pipeline] } 00:23:06.976 [Pipeline] // node 00:23:06.982 [Pipeline] End of Pipeline 00:23:07.025 Finished: SUCCESS